Hacking nuclear power plants: Stuxnet, Duqu and the anatomy of a virus
We’ve seen countless real-world examples of havoc unleashed upon web sites, databases and other digital systems. However, taking control of the physical processes of critical facilities has been largely theoretical, or the stuff of science fiction and action films.
Stuxnet was the first example of exactly this sort of threat succeeding ‘in the wild,’ and the importance of this cannot be overstated. Even more worrying is the fact that the world’s best security researchers are still unraveling the potentially deadly threads of Stuxnet’s years-old code -- and there are new iterations, such as Duqu, parts of whose code researchers have yet to even recognize.
Dissecting the Worm
In addition to exploiting known vulnerabilities in Windows, Stuxnet’s code contained multiple zero-day attacks -- attacks especially valuable to hackers because they take advantage of a security hole unknown to the developer or allied security researchers, and which must first be understood and reverse-engineered before a patch or fix can be applied. A single zero-day attack can be sufficient to allow remote access, costly disruption or catastrophic damage to a system, before and even after discovery -- and Stuxnet included at least four.
Additionally, Stuxnet carried valid VeriSign digital signatures from RealTek (and, when those were revoked, from JMicron). The strength of the cryptography behind these signatures virtually assured that the signing keys had been stolen (or possibly provided to the creators) rather than cracked. Most likely spread “promiscuously” via USB drives, the worm was designed to remain inert until specific Siemens hardware was detected, and to limit its own proliferation so that no more than three copies would be made.
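The two self-restraint behaviors described above -- lying dormant until a specific hardware fingerprint is seen, and capping the number of copies made -- can be sketched conceptually. Every name below is hypothetical; this illustrates the gating logic, not Stuxnet’s actual implementation:

```python
# Conceptual sketch of the self-limiting behaviors described above.
# All identifiers are hypothetical; this is not Stuxnet's actual code.

TARGET_FINGERPRINT = "Siemens S7-315"  # hypothetical hardware ID to match
MAX_COPIES = 3                         # cap on copies made per infection

def should_activate(detected_hardware: str) -> bool:
    """Remain inert unless the specific target hardware is present."""
    return detected_hardware == TARGET_FINGERPRINT

def may_replicate(copies_made: int) -> bool:
    """Limit proliferation: no more than three copies from one infection."""
    return copies_made < MAX_COPIES
```

The point of both checks is stealth: a worm that activates rarely and spreads sparingly generates far fewer anomalies for defenders to notice.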
Another Stuxnet first was the implementation of a rootkit for a programmable logic controller (PLC). A rootkit is a potentially dangerous type of software that buries itself at the deepest level of access (the ‘root’). Although not inherently destructive in and of itself, a rootkit grants continued access to every level of the controlled system -- including any process designed to detect and prevent threats. Rootkits have been a serious nuisance for years, whether deployed by malicious hackers or the music industry. Stuxnet, however, brought the threat to a PLC -- a specialized computer that controls motorized electromechanical processes such as assembly lines and amusement park rides -- or the centrifuges used in nuclear enrichment.
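The hiding trick at the heart of a rootkit can be shown in miniature: it interposes itself between the system and any tool that asks questions, filtering its own artifacts out of the answers. The sketch below is purely illustrative (real rootkits hook at the kernel or, in Stuxnet’s case, the PLC communication layer, and these process names are invented):

```python
# Illustrative sketch of rootkit-style interception: a hooked API
# filters the rootkit's own artifacts out of results before any
# detection tool sees them. Names and layers are invented for clarity.

HIDDEN = {"malware.exe"}  # artifacts the rootkit conceals

def real_list_processes() -> list[str]:
    """What is actually running on the system."""
    return ["explorer.exe", "malware.exe", "antivirus.exe"]

def hooked_list_processes() -> list[str]:
    """What detection tools see once the hook is installed."""
    return [p for p in real_list_processes() if p not in HIDDEN]
```

A detection tool calling the hooked function gets a clean-looking answer, which is why a rootkit must be found by looking underneath the interfaces it controls rather than through them.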
Detonating the Payload
“An Iranian IR-1 centrifuge normally spins at 1,064 hertz, or cycles per second,” reported Der Spiegel’s Holger Stark. “When the rotors began going haywire, they increased their frequency to 1,410 hertz for 15 minutes and then returned to their normal frequency. The virus took over control again 27 days later, but this time it slowed down the rotors to a frequency of a few hundred hertz for a full 50 minutes. The resulting excessive centrifugal force caused the aluminum tubes to expand, increasing the risk of parts coming into contact with one another and thereby destroying the centrifuges ... six cascades containing 164 centrifuges each were reportedly destroyed in this manner.”
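Those numbers understate the violence of the overspeed phase, because centripetal force at a fixed radius grows with the square of rotation frequency. A quick back-of-the-envelope calculation (assuming force proportional to frequency squared, and ignoring resonance effects):

```python
# Back-of-the-envelope stress estimate for the overspeed phase above.
# Centripetal force at fixed radius scales with the square of frequency.

normal_hz = 1064   # normal IR-1 operating frequency (per the report)
attack_hz = 1410   # overspeed frequency held for 15 minutes

overspeed = attack_hz / normal_hz   # ~1.33x normal rotation speed
force_ratio = overspeed ** 2        # force grows quadratically

print(f"Overspeed: {overspeed:.2f}x, force: {force_ratio:.2f}x normal")
```

A roughly 33 percent overspeed thus translates into roughly 76 percent more centrifugal stress on tubes engineered to run near their limits -- enough to explain the expansion and contact damage Stark describes.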
Unlike the vast majority of cyberattacks, the chief challenge for the Stuxnet attack was to find a way into a network that had no connection to the Internet at large. In a sense, this was like developing a disease that could only harm a single species of fish in an isolated pond in Iceland -- and releasing it, via an infected crab, into the South Pacific Ocean. Discovery in the wild and related warnings occurred over a year before the true effect and target of Stuxnet became apparent.
The detailed and wide-ranging knowledge necessary to develop an attack on the scale of Stuxnet is far more specialized than the general-purpose tools in most hackers’ arsenals. All of this strongly implies, if not completely confirms, that the Stuxnet worm was an extremely focused, specifically targeted attack. In fact, security experts almost universally agree that such an attack is beyond the capabilities of even the most advanced hacker groups -- although not beyond the resources of nations (alone or allied), or perhaps of multinational corporations and terrorist organizations.
"And from the SCADA side of things, which is a very specialized area, they would have needed the actual physical hardware for testing, and know how the specific factory floor works," said Symantec’s Liam O’Murchu. "Someone had to sit down and say, ‘I want to be able to control something on the factory floor, I want it to spread quietly, I need to have several zero-days,’ and then pull together all these resources. It was a big, big project."
"This was a very important project to whomever was behind it," said O’Murchu. "But when an oil pipeline or a power plant is involved, the stakes are very high."
The Duqu Framework
The more recently discovered Duqu shares enough code with Stuxnet to support the safe assumption that the same people were behind both, or at least had access to the original source code. The use of stolen digital signatures continues, with Duqu signed via a certificate from C-Media (a Taiwan-based company, as are RealTek and JMicron). Like Stuxnet, Duqu infiltrates SCADA (supervisory control and data acquisition) systems, the networks that coordinate control of industrial and infrastructure machinery over large areas (from a plant site to an entire country).
Unlike Stuxnet, Duqu does not replicate itself in even a limited manner, but can be commanded remotely to spread. The sample recovered by CrySyS requires the use of a “dropper” that exploits a zero-day vulnerability in Microsoft Word (via email). And Duqu also contains mysterious code that is currently confounding security researchers, a so-called “Duqu Framework” that does not conform to any known programming language.
Most importantly, Duqu is a ‘recon’ worm, designed to use a keylogger and other means to extract information rather than take control of hardware -- most likely as a “precursor to a future Stuxnet-like attack.”
It’s almost irrelevant that updated centrifuges are far less vulnerable to damage via expansion, or that Microsoft has patched AutoPlay/AutoRun vulnerabilities, or that Siemens has adopted a solution involving “whitelisting” (restricting a network to a particular list of ‘approved’ devices or connections). Stuxnet was developed, deployed and at least partly succeeded before being fully detected and protected against -- despite the nearly unimaginable obstacles placed before it. And the language of Duqu hasn’t even been fully ‘translated’ yet.
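The whitelisting approach mentioned above amounts to a default-deny policy: anything not explicitly approved is refused, which inverts the usual blacklist model of blocking only known threats. A minimal sketch, with purely illustrative device identifiers:

```python
# Minimal sketch of a default-deny device whitelist, as described above.
# Device IDs are illustrative, not any real Siemens configuration.

APPROVED_DEVICES = {"plc-01", "hmi-station-02", "engineering-ws-03"}

def connection_allowed(device_id: str) -> bool:
    """Default deny: only devices on the approved list may connect."""
    return device_id in APPROVED_DEVICES
```

Under such a policy, an unrecognized USB drive or laptop is refused by default -- a defense aimed squarely at Stuxnet’s most likely infection route.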
Through the Looking Glass
Engineers can certainly attempt to minimize the potential for any physical means of sabotage, and security professionals are dedicated to protecting systems before, during and after the fact. However, no system can ever be made perfectly invulnerable -- the more complex a system is, the more potential exists for zero-day exploits and other unforeseen means of entry. Stuxnet is merely the first sign of an ongoing risk to any facility that provides critical infrastructure and/or presents a potential physical danger to its surroundings.
In a way, it’s a shame that we’re so familiar with similar fictional plots; attempts to communicate the significance of Stuxnet were likely to be greeted with ‘yeah, but couldn’t they do that already?’ Part of this can be blamed on the inevitable blurring of the actual effects due to political and media filtering. But in its own way, Stuxnet truly was a turning point on the order of 9/11 -- when the 20th Century’s threat of harm from The Other became the 21st Century’s tangible daily reality. And in the case of Stuxnet, Duqu and whatever else may be out there, we still don’t KNOW who this Other really is, or how many there may be.