Back when I was young, people were taught not to install or run software from untrusted parties, very much like not accepting candy from strangers, or not letting the Big Bad Wolf in. Prevention is better than cure. Then along came application macros and the World Wide Web of Execution, which embed software in data files and web pages, respectively, so that it is automatically executed when the file or the web page is viewed. Such automatic execution of untrusted software paved the way for plenty of malicious software: from traditional trojan horses that, posing as desirable programs or data files, contain executable code granting malicious third parties control over the computer, to stealthy cryptocoin miners in web pages that surreptitiously spend users' compute power to transfer wealth to third parties.
Meltdown and Spectre are not traditional attacks on software vulnerabilities that remotely take over control of computers: they rather enable even unprivileged software in sandboxed environments to access data in memory that is not and should not be directly accessible to such software. For example, while waiting for the completion of some slow operation, say checking whether access to a memory location should be granted, the processor may go ahead and tentatively perform likely subsequent operations. These speculative operations might even use the data, from memory not yet known to be inaccessible, which may in turn bring into the cache other memory locations whose addresses depend on that data. Measuring access time can then reveal whether those memory locations are cached. Combined, these effects create the hardware side channel exploited by Meltdown and some Spectre variants to allow access to data in memory that should not be accessible. Other variants are already known, and it seems likely that more will be discovered and published, even involving other hardware features and side channels.
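To make the mechanism concrete, here is a toy model of that side channel. All names are illustrative, and the "cache" is just a Python set: real attacks measure actual memory access latencies rather than membership in a set. The "processor" speculatively reads a secret byte past a failed permission check, touching a cache line indexed by the secret's value as a side effect; the "attacker" then probes which line became fast.

```python
# Toy model of a Meltdown/Spectre-style cache side channel.
# Entirely hypothetical names; real attacks time actual loads
# (e.g. with rdtsc) instead of inspecting a simulated set.

SECRET = b"S"          # byte the victim holds; not directly readable
CACHE_LINES = 256      # one probe line per possible byte value

def speculative_victim(cache):
    """Pretend the CPU speculates past a failed permission check:
    the architectural result is discarded, but the cache line
    indexed by the secret byte is loaded as a side effect."""
    secret_byte = SECRET[0]          # transient, "inaccessible" read
    cache.add(secret_byte)           # side effect: line now cached
    raise PermissionError("access denied")  # what the victim observes

def attacker_probe(cache):
    """Probe every line; in this model, 'in the cache' stands for
    'loads fast'.  The index of the fast line reveals the secret."""
    return [i for i in range(CACHE_LINES) if i in cache]

cache = set()                        # empty (flushed) cache
try:
    speculative_victim(cache)
except PermissionError:
    pass                             # the fault, not the data, is visible
leaked = attacker_probe(cache)
print(bytes(leaked))                 # recovers b"S" via the side channel
```

The point of the model is that the secret never flows through any architecturally visible result; it is reconstructed entirely from which memory became fast to access.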
Now, really, how bad is that? Very bad, you might think, supposing you hold private data in your computer. But how bad would that be if all the software running on your computer was software you could trust, running under your control? Would it be a problem if one program, serving you, got access to internal data of another, also serving you? Surely you could have changed the latter program to grant the former direct access to the data it needed, instead of resorting to side channels. Why should you not do that? Maybe you're not allowed to make changes to one of the programs--but then could you still say the software is under your control?
Free Software is under community control, since users can cooperate to collectively audit the software and change it to suit their collective needs. It is also under individual control, as any user can, independently or with help from third parties chosen and trusted by the user, audit the software and adapt it so that the version on the user's computer behaves just the way the user wishes.
The Free Software movement has long defended the essential software freedoms as human rights grounded in ethics, and has long held that users should perform their computing on computers under their control, using only software that respects their freedoms. This means using Free application software running on a Free operating system, avoiding any non-Free components that could interfere with the computing, be they applications, plugins, addons, libraries, or even peripheral, mainboard or CPU firmware. Non-Free Software fragments embedded in data files and web pages are not to be overlooked.
Even remote software services, being the users' computing, should be performed by Free Software running on computing devices under the users' control. This does not rule out hiring virtual computing devices from third parties, although it might require trusting the provider a little more than co-locating your own server, or keeping on premises a server that could grant third parties remote control through such preloaded firmware as Intel ME or AMD PSP. There is no ethical obstacle to using or offering such virtual computing devices, as long as the provider refrains from inspecting or interfering with customers' computing, and strives to not allow third parties to do so. For example, if the chosen hardware would allow one customer to access another's data, or interfere with another's computing, the provider should arrange for such separation, if not through software or hardware features, then by assigning them to separate hardware.
While virtual server providers panicked as the features expected to enforce customer separation melted down, users who have refused to rely on non-Free Software or services as software substitutes for their own computing, relying on trustworthy Free Software on their own computers instead, have had little reason to worry about Spectre and Meltdown. When we can and do verify that all the software we use can be trusted to do just what we expect, the hardware side channels exploited by Spectre or Meltdown pose no threat, because the software we use serves us.
Any unwanted, deliberate features to obtain information through side channels, and to then transfer it to third parties, would most likely be noticed in individual or community code audits, if not caught as soon as their implementations are contributed to the software project.
Unlike elaborate features to exploit such side channels, which would be very hard to miss, coding mistakes might slip through; but those would still be caught by the existing, if imperfect, hardware features: despite the side channels, they still protect against accidental attempts to directly access data that should not be accessible.
Still, freedom doesn't magically repel each and every threat; rather, freedom and control over our software give us the opportunity to protect ourselves and each other. For example, software freedom does not protect you from remote NetSpectre attacks, but if all the software running on computers under your control is Free Software, you can scan its source code for remotely exploitable gadgets, modify them so that they are no longer exploitable, and be assured that none remain hiding in binary blobs, because such blobs do not belong in Free Software.
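As a crude illustration of what such source scanning might look like, the sketch below (hypothetical, deliberately naive) flags C source lines that match the classic Spectre variant 1 shape, where one array access indexes another after a bounds check. Real gadget finders, and compiler mitigations, track data flow through compiled code rather than pattern-matching source text; this is only a sketch of the idea that, with source code in hand, gadgets can be hunted down.

```python
import re

# Naive pattern for a double-indexed array access, the core of a
# Spectre v1 gadget such as:
#   if (x < array1_size) y = array2[array1[x] * 4096];
# An outer subscript whose contents include another subscript.
GADGET = re.compile(r"\w+\s*\[[^][]*\w+\s*\[[^][]*\][^][]*\]")

def find_gadget_candidates(source):
    """Return (line number, stripped line) pairs matching the pattern."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), 1)
            if GADGET.search(line)]

example = """
if (x < array1_size)
    y = array2[array1[x] * 4096];
z = plain[i];
"""
for n, line in find_gadget_candidates(example):
    print(n, line)   # flags only the double-indexed access on line 3
```

A matched line is only a candidate: a human auditor, or a proper analysis tool, still has to judge whether the index is attacker-controlled and whether speculation past the bounds check is possible. The freedom the text describes is precisely what makes that follow-up audit possible.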
When it comes to software downloaded and run on a browser, absent better infrastructure to give users ultimate control, every single use may require a new audit. Meanwhile, blocking by default, even when the code is marked as Free Software, might be a safer policy. Trust isn't so easy to earn.
Software mitigations have been proposed to make it harder for malicious software to exploit Meltdown and Spectre. Some of the mitigations depend on modified cpu microcode. That amounts to throwing non-Free Software at the non-problem. Who knows what undesirable features would be brought in along with the promised mitigations? It is not like (non-Free Software) updates have never been abused to impose changes users would rather not have.
Indeed, most of these software mitigations, involving microcode or not, make the system run slower, in some cases a lot slower. Freedom-minded users have no need to incur that performance loss and additional power consumption, just to keep information from being obtained by other trusted programs running under their own control.
Copyright 2018 Alexandre Oliva. Who's afraid of Spectre & Meltdown? by Alexandre Oliva is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license. Please copy and share.