27-09-2016, 11:33 #1
[EN] Linux kernel unsafe at any clock speed
Ars reports from the Linux Security Summit—and finds much work that needs to be done.
Unsafe at any clock speed: Linux kernel security needs a rethink
The Linux kernel today faces an unprecedented safety crisis. Much like when Ralph Nader famously told the American public that their cars were "unsafe at any speed" back in 1965, numerous security developers told the 2016 Linux Security Summit in Toronto that the operating system needs a total rethink to keep it fit for purpose.
No longer the niche concern of years past, Linux today underpins the server farms that run the cloud and more than a billion Android phones, not to mention the coming tsunami of grossly insecure devices that will be hitched to the Internet of Things. Today's world runs on Linux, and the security of its kernel is a single point of failure that will affect the safety and well-being of almost every human being on the planet in one way or another.
"Cars were designed to run but not to fail," Kees Cook, head of the Linux Kernel Self Protection Project, and a Google employee working on the future of IoT security, said at the summit. "Very comfortable while you're going down the road, but as soon as you crashed, everybody died."
"That's not acceptable anymore," he added, "and in a similar fashion the Linux kernel needs to deal with attacks in a manner where it actually is expecting them and actually handles gracefully in some fashion the fact that it's being attacked."
Jeffrey Vander Stoep, a software engineer on the Android security team at Google, echoed Cook’s message: "This kind of hearkens back to last year's keynote speech when [Konstantin “Kai” Ryabitsev] compared computer safety with the car industry years ago. We need more and we need better safety features, and with it in mind that this may cause inconvenience for developers, we still need them."
For his part, Kai, a senior systems administrator at the Linux Foundation, who was unable to attend this year’s summit, is pleased that this car safety analogy is finding traction.
“We approach security today as though we are still living in the world of the 1990s and 2000s, computers in a data centre managed by knowledgeable people,” he told Ars. But, he pointed out, most computers today—laptops, smartphones, IoT devices—are not managed and secured by IT professionals.
“For the cases where computers are not well protected in the hands of end-users who are not IT professionals, and who do not have any recourse to IT professional help, we need to design systems that proactively protect them,” he said. “We have to change the way we approach this dramatically, the same way the vehicle manufacturers in the 1970s did.”
This is, however, easier said than done.
Killing bug classes, not political dissidents
The clear consensus at the Linux Security Summit was that squashing bugs is a losing strategy. Many deployed devices running Linux will never receive security updates, and patching a security hole in the upstream kernel does nothing to ensure the safety of an IoT device that could be in use for a decade and may forever be ignored by the manufacturer.
Even devices that do receive patches may see long gaps between public bug discovery and a patch being applied. Cook gave the example of an Internet-connected door lock that an end-user might well use for 15 years or more. Such devices are likely to receive sporadic security patches, if at all.
Worse, the lifetime of a critical security bug in the Linux kernel, from introduction in a code commit to public discovery and a patch being issued, averages three years or more. According to Cook’s analysis, critical and high-severity security bugs in the upstream kernel have lifespans of 3.3 to 6.4 years between commit and discovery.
"The question I get a lot is, 'Well, isn't this just theoretical? No one's actually finding these bugs to begin with, so there's no window of opportunity,'" he said. "And that's demonstrably false."
Nation-state attackers are watching every commit, looking for an opening, he said, and "people are finding these bugs sometimes immediately when they're introduced."
He went on: "This seems to be a big thing that people for some reason just can't accept mentally. You know, like 'well I have no open bugs in my bug tracker, everything's fine.'"
How, then, can the kernel proactively defend itself against bugs that have not yet been reported—or even implemented?
The answer, said Cook, could be a matter of life and death for some people: "If you're a dissident, an activist somewhere in the world, and you're getting spied on, your life is literally at risk because of these bugs. As we move forward, we need devices that can protect themselves."
Protecting a world in which critical infrastructure runs Linux—not to mention protecting journalists and political dissidents—begins with protecting the kernel. The way to do that is to focus on squashing entire classes of bugs, so that a single undiscovered bug would not be exploitable, even on a future device running an ancient kernel.
Further, since successful attacks today often require chaining multiple exploits together, finding ways to break the exploit chain is a critical goal.
Kernel drivers suck
However, that's hard to do when the vast majority of kernel bugs come from vendor drivers, not the upstream Linux kernel, Vander Stoep said.
"Android does in fact inherit bugs from the upstream kernel," he said, "but our data shows that most of Android's kernel security vulnerabilities live in device drivers."
And, he explained, many more are introduced by manufacturers, meaning that securing the Linux kernel against bugs in code over which upstream has no control becomes the challenge.
"[Kernel] maintainers say 'bugs you didn't inherit from upstream are not upstream's problem,' but I think the reality is that this is what most Linux systems look like, and it's not limited to Android devices," he said. "Kernel defence will protect both code that comes from upstream as well as out-of-tree vulnerabilities. That's a really important point."
He was quick to add that he was not calling out any particular vendor for poor security practices. As he put it, to audience chuckles, “they’re really all doing poorly."
The bug stops here
While the technical challenges the Linux kernel faces in protecting itself against zero-days are “incredibly complex,” Cook said that the politics of submitting patches upstream can be even more challenging.
Coming across as a consummate diplomat, both in his talk and in person, he gently chided the buck-passing over how kernel security issues are discovered, fixed, and deployed.
"I hear a lot of blame-shifting of where this problem needs to be solved," he told the audience. "Even if upstream says 'oh sure we found that bug, we fixed it,' what kernel version was it fixed in? Did it end up in a stable release? Did a vendor backport it? Did the carrier for the phone take that update from the vendor and push it onto phones?"
He went on: "The idea is to build in the protection technologies from the start, so that when a bug comes along, we don't really care."
But these mitigations come with trade-offs to performance or maintainability—something that, he hinted, was a continuous struggle to convince upstream kernel maintainers to accept.
"Understanding that developing against upstream means you're not writing code for the kernel, you're writing code for the kernel developers," he said.
Not just the developers, either. It's for everyone in the Internet-connected world we now live in.
“If we are to start being mindful of this new era of computing,” said Kai, “we have to change the way we approach this dramatically, the same way that vehicle manufacturers in the 1970s did.”
29-09-2016, 13:54 #2
How to Crash Systemd
Systemd is dangerous not only because it is insecure, but because it is setting itself up to be irreplaceable.
The following command, when run as any user, will crash systemd:
NOTIFY_SOCKET=/run/systemd/notify systemd-notify ""
After running this command, PID 1 is hung in the pause system call. You can no longer start and stop daemons. inetd-style services no longer accept connections. You cannot cleanly reboot the system. The system feels generally unstable (e.g. ssh and su hang for 30 seconds since systemd is now integrated with the login system). All of this can be caused by a command that's short enough to fit in a Tweet.
The bug is remarkably banal. The above systemd-notify command sends a zero-length message to the world-accessible UNIX domain socket located at /run/systemd/notify. PID 1 receives the message and fails an assertion that the message length is greater than zero. Despite the banality, the bug is serious, as it allows any local user to trivially perform a denial-of-service attack against a critical system component.
The immediate question raised by this bug is what kind of quality assurance process would allow such a simple bug to exist for over two years (it was introduced in systemd 209). Isn't the empty string an obvious test case? One would hope that PID 1, the most important userspace process, would have better quality assurance than this. Unfortunately, it seems that crashes of PID 1 are not unusual, as a quick glance through the systemd commit log reveals commit messages such as:
coredump: turn off coredump collection only when PID 1 crashes, not when journald crashes
coredump: make sure to handle crashes of PID 1 and journald special
coredump: turn off coredump collection entirely after journald or PID 1 crashed
Systemd's problems run far deeper than this one bug. Systemd is defective by design. Writing bug-free software is extremely difficult. Even good programmers would inevitably introduce bugs into a project of the scale and complexity of systemd. However, good programmers recognize the difficulty of writing bug-free software and understand the importance of designing software in a way that minimizes the likelihood of bugs or at least reduces their impact. The systemd developers understand none of this, opting to cram an enormous amount of unnecessary complexity into PID 1, which runs as root and is written in a memory-unsafe language.
Some degree of complexity is to be expected, as systemd provides a number of useful and compelling features (although they did not invent them; they were just the first to aggressively market them). Whether or not systemd has made the right trade-off between features and complexity is a matter of debate. What is not debatable is that systemd's complexity does not belong in PID 1. As Rich Felker explained, the only job of PID 1 is to execute the real init system and reap zombies. Furthermore, the real init system, even when running as a non-PID 1 process, should be structured in a modular way such that a failure in one of the riskier components does not bring down the more critical components. For instance, a failure in the daemon management code should not prevent the system from being cleanly rebooted.
In particular, any code that accepts messages from untrustworthy sources like systemd-notify should run in a dedicated process as an unprivileged user. The unprivileged process parses and validates messages before passing them along to the privileged process. This is called privilege separation and has been a best practice in security-aware software for over a decade. Systemd, by contrast, does text parsing on messages from untrusted sources, in C, running as root in PID 1. If you think systemd doesn't need privilege separation because it only parses messages from local users, keep in mind that in the Internet era, local attacks tend to acquire remote vectors. Consider Shellshock, or the presentation at this year's systemd conference which is titled "Talking to systemd from a Web Browser."
Systemd's "we don't make mistakes" attitude towards security can be seen in other places, such as this code from the main() function of PID 1:
/* Disable the umask logic */
if (getpid() == 1)
        umask(0);
Setting a umask of 0 means that, by default, any file created by systemd will be world-readable and -writable. Systemd defines a macro called RUN_WITH_UMASK which is used to temporarily set a more restrictive umask when systemd needs to create a file with different permissions. This is backwards. The default umask should be restrictive, so forgetting to change the umask when creating a file would result in a file that obviously doesn't work. This is called fail-safe design. Instead systemd is fail-open, so forgetting to change the umask (which has already happened twice) creates a file that works but is a potential security vulnerability.
The Linux ecosystem has fallen behind other operating systems in writing secure and robust software. While Microsoft was hardening Windows and Apple was developing iOS, open source software became complacent. However, I see improvement on the horizon. Heartbleed and Shellshock were wake-up calls that have led to increased scrutiny of open source software. Go and Rust are compelling, safe languages for writing the type of systems software that has traditionally been written in C. Systemd is dangerous not only because it is introducing hundreds of thousands of lines of complex C code without any regard to longstanding security practices like privilege separation or fail-safe design, but because it is setting itself up to be irreplaceable. Systemd is far more than an init system: it is becoming a secondary operating system kernel, providing a log server, a device manager, a container manager, a login manager, a DHCP client, a DNS resolver, and an NTP client. These services are largely interdependent and provide non-standard interfaces for other applications to use. This makes any one component of systemd hard to replace, which will prevent more secure alternatives from gaining adoption in the future.
Consider systemd's DNS resolver. DNS is a complicated, security-sensitive protocol. In August 2014, Lennart Poettering declared that "systemd-resolved is now a pretty complete caching DNS and LLMNR stub resolver." In reality, systemd-resolved failed to implement any of the documented best practices to protect against DNS cache poisoning. It was vulnerable to Dan Kaminsky's cache poisoning attack which was fixed in every other DNS server during a massive coordinated response in 2008 (and which had been fixed in djbdns in 1999). Although systemd doesn't force you to use systemd-resolved, it exposes a non-standard interface over DBUS which they encourage applications to use instead of the standard DNS protocol over port 53. If applications follow this recommendation, it will become impossible to replace systemd-resolved with a more secure DNS resolver, unless that DNS resolver opts to emulate systemd's non-standard DBUS API.
It is not too late to stop this. Although almost every Linux distribution now uses systemd for their init system, init was a soft target for systemd because the systems they replaced were so bad. That's not true for the other services which systemd is trying to replace such as network management, DNS, and NTP. Systemd offers very few compelling features over existing implementations, but does carry a large amount of risk. If you're a system administrator, resist the replacement of existing services and hold out for replacements that are more secure. If you're an application developer, do not use systemd's non-standard interfaces. There will be better alternatives in the future that are more secure than what we have now. But adopting them will only be possible if systemd has not destroyed the modularity and standards-compliance that make innovation possible.
30-09-2016, 18:35 #3
Android malware that can infiltrate corporate networks is spreading
DressCode has been found circulating in at least 3,000 Trojanized apps
Sep 30, 2016
A strain of Android malware is spreading across app stores, including Google Play, and is capable of stealing sensitive files from corporate networks.
DressCode, a family of Android malware, has been found circulating in at least 3,000 Trojanized apps, security firm Trend Micro said on Friday.
DressCode hides itself inside games, user interface themes, and phone optimization boosters. It can also be difficult to detect because the malicious coding only makes up a small portion of the overall app.
On Google Play, Trend Micro found more than 400 apps that are part of the DressCode family, it said. That's 10 times as many as security researchers at Check Point found a month ago.
Trend Micro added that one of these apps on Google Play had been installed 100,000 to 500,000 times. Once installed, DressCode's malicious code contacts its command-and-control servers and receives orders from its developers.
The malware is particularly dangerous because it can infiltrate whatever network the infected device connects to. Imagine a user bringing a phone to the office and connecting to the corporate network. The makers of DressCode could use the phone as a springboard to hack into the corporate network or download sensitive files, Trend Micro said.
"With the growth of Bring Your Own Device (BYOD) programs, more enterprises are exposing themselves to risk via carefree employee mobile usage," the security firm said.
According to Trend Micro, 82 percent of businesses have BYOD programs, allowing their employees to use personal devices for work functions.
The DressCode malware can also be used to turn infected devices into a botnet. This allows the infected devices to carry out distributed denial-of-service (DDoS) attacks or be used to send spam.
Trend Micro has found DressCode infecting enterprise users in the U.S., France, Israel, Ukraine, and other countries. The security firm is advising that users always check the online reviews for whatever apps they download.
Google didn't immediately respond to a request for comment on the malware.
Last edited by 5ms; 30-09-2016 at 18:37.