SCIENTIFIC-LINUX-USERS Archives

January 2021

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From: Yasha Karant <[log in to unmask]>
Date: Mon, 25 Jan 2021 10:34:30 -0800
Part of the issue with the AT&T and BSD init systems had to do with 
"local control" and with captive markets -- the latter a deliberate 
choice imposed by the profit-controllers on the engineers and 
implementers.  This is the old interplay between proprietary systems 
(often locked down as "intellectual property" because the entity has 
enough patent attorneys to acquire "property" that can then be used to 
block development of an innovation and so preserve revenue generation) 
and open standards.  It is the same reason that most threaded fasteners 
from vehicle manufacturers A and B are readily available and 
interchangeable when the same thread, grade, form, etc., are used, 
whereas engines are proprietary.

I fully agree that a full security audit would be valuable -- an audit 
that would need to be kept current.  Nonetheless, "lines of code" is a 
well-established empirical metric: the approximate number of execution 
("run time") software defects in syntactically correct source code 
scales with the number of lines in the source.  I am not defending 
"lines of code", merely repeating a standard "rule of thumb".
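To make that rule of thumb concrete, here is a back-of-the-envelope 
sketch.  The defect rates and the code-base size are assumed round 
numbers chosen only to illustrate the arithmetic -- they are commonly 
quoted ballpark ranges, not measurements of SystemD or any other 
project:

```python
# Illustrative "defects per KLOC" estimate.  Both the rates and the
# line count below are assumptions for illustration, not data.
def estimated_defects(lines_of_code, defects_per_kloc):
    """Scale a defects-per-thousand-lines rate to a whole code base."""
    return lines_of_code / 1000 * defects_per_kloc

loc = 1_000_000  # hypothetical round figure for a large code base
for rate in (0.5, 5, 25):  # low, middling, and high assumed rates
    print(f"{rate:>4} defects/KLOC -> ~{estimated_defects(loc, rate):,.0f} residual defects")
```

The point of the exercise is only that, whatever the true rate, the 
estimate grows linearly with the size of the code base -- which is why 
"lines of code" persists as a rough risk proxy.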

My bigger concern is that SystemD is no longer used on an isolated 
system in a segregated environment (NB: that term has nothing to do with 
socio-political contexts -- please do not misinterpret it, as some 
non-CSE colleagues have done on hearing it, "race conditions", and other 
CSE-specific terminology).  The larger the distributed (and integrated 
or inter-dependent) environment is, particularly over the public 
Internet (even with VPNs or the like), the greater the risk of 
compromise through vulnerabilities.  Hardening systems is not easy, and 
there are as yet no definite algorithms (let alone implementations) to 
detect vulnerabilities pre-exploit.  Obviously, methods to control 
memory leaks in certain programming languages, etc., help -- but unless 
source code is available, along with verification of the correctness of 
the compiler (binary or "byte-code") output, there is no guarantee that 
such measures have been implemented.  SystemD is open, so in principle 
the coding issue could be addressed; that is not the case for closed 
systems, where no source code is available.

Were the older init systems more resilient?  They were never designed 
for current wide-area-network platforms and environments such as "cloud 
computing" -- thus it is very unlikely that they would perform better.  
The issue comes back to the bloat in SystemD, which oversees, in some 
sense, "everything".  As such, it is a possible single point of failure, 
or of exploit.

However, lacking data, and the person power to both accumulate and 
understand such data, this discussion is more speculative than empirical 
-- "philosophy", not "science".  If a major vulnerability is ever 
exploited through SystemD (as recently was revealed for a proprietary 
distributed update environment, not SystemD), the consequences will 
affect more than just the SL community.

On 1/25/21 10:05 AM, Lamar Owen wrote:
> On 1/25/21 12:04 PM, Yasha Karant wrote:
>> The question is:  what mechanism?  The reality today for Linux systems 
>> as deployed at scale mostly is SystemD.  The question -- a question 
>> that goes well beyond what started as an exchange about EL 8 -- is 
>> what goes forward?  SystemD as it currently stands is too delicate and 
>> too vulnerable to compromise, either within itself or in terms of the 
>> processes/subsystems it "controls", despite the large scale deployment 
>> of SystemD.  ...
> 
> This statement begs some proof (preferably a formal code audit) of the 
> stated opinion that systemd is too 'delicate' and vulnerable to 
> compromise.  Anecdotal evidence or counting LoC and saying 'more LoC = 
> automatically more vulnerable' need not apply.  Of course, all code is 
> vulnerable, but the implication is that systemd is by nature more 
> vulnerable because $reason where $reason is something other than a 
> formal audit.
> 
>> I asked a question to which I have not seen an answer:  does a SystemD 
>> configuration (plain text files in the SystemD design) from two 
>> similar hardware platforms but different Linux distros (say, EL and 
>> LTS) interoperate, or require significant rewriting to produce the 
>> "same results"?  In other words, are the valuable concepts of 
>> portability and re-usability (do not reinvent the wheel, another 
>> engineering turn of phrase) met in practice with SystemD? 
> 
> The systemd unit files are more portable than old initscripts, in my 
> experience.  The determining factors will be whether the distributions' 
> engineers pick the same names for the services started by the unit file 
> and if the paths to executables are the same or not.  The main 
> differences here are the same as the differences in the locations of 
> files between the major branches of the Linux filesystem hierarchy; 
> Debian and derivatives will be different from Red Hat and derivatives, 
> to pick the two top examples.
> 
> Old initscripts were and are highly dependent upon the functions sourced 
> from the distribution's function library for initscripts, as well as 
> paths and daemon/service name; chkconfig metadata differences; and, of 
> course, they are executing as root in the system shell, and shell 
> quoting and escaping syntax becomes critical (the initscript for an 
> autossh instance, for instance, with say a half dozen reverse tunnels; I 
> have a few of those around here).  I wrote a few for PostgreSQL for use 
> on several different RPM-based systems; there was quite a variety, and 
> SuSE did things differently from Red Hat which did things differently 
> from TurboLinux (one of the targets of my packaging), and others did 
> things yet more differently.  It's possible to write initscripts to be 
> very portable, but it is harder than writing a unit file that can be 
> portable, as far as I can see.  But I do always reserve the right to be 
> wrong.
> 
> In practice a unit file from an upstream project, especially if the 
> project uses /opt/$progname or /usr/local/{bin|lib}, will be very 
> portable across distributions.  This I have experienced; a single unit 
> file can pretty easily be written to work across all systemd 
> distributions unless it needs some distribution-specific daemon/service 
> or feature.
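As a concrete illustration of the kind of unit file Lamar describes, 
here is a minimal sketch for a hypothetical daemon installed under 
/usr/local/bin.  The service name "mydaemon", its path, and its flags 
are invented for the example and do not come from any distribution:

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example
[Unit]
Description=Example daemon (illustrative only)
After=network.target

[Service]
# An absolute path under /usr/local sidesteps distribution-specific
# locations (/usr/sbin vs /usr/bin), which is what makes the unit
# portable across systemd distributions.
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because the file carries no shell code and sources no distribution 
function library, the only distro-sensitive pieces are the service name 
and the executable path -- exactly the determining factors named above.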
