Countermeasures to Intel’s Biggest Vulnerability

January 17, 2018  |  By Taesoo Kim
Assistant Professor, School of Computer Science

A cybersecurity vulnerability recently revealed in Intel Corp. microchips is similar to risks that have been studied for the past two years by Georgia Tech’s Taesoo Kim, assistant professor of computer science. Kim explains the countermeasures his team proposed in 2016 and why security education must become a priority in computer science degree programs.


In early 2016, Georgia Tech began talking with Intel Corp. about an exploit we discovered that could present large, widespread risks to any system using its microchips. The exploit, which we named “Dr. K,” could break kernel security by revealing the layout of kernel address space memory, which helps attackers bypass a last line of defense employed by operating systems.

The exploit is similar to “Meltdown” and “Spectre,” two recently identified vulnerabilities that leak the contents of kernel memory.

Four Ph.D. students and I visited Intel’s campus that fall and spent a full day explaining our body of work – five research papers in all – about risks to Intel technology. Academic researchers disclose vulnerabilities to original equipment manufacturers before publication so that affected technology providers can develop patches before hackers can act. University researchers also propose solutions.

Of the six countermeasures we outlined in 2016, three led to patches announced or provided this month by Intel, Microsoft, Linux, Apple, and Mozilla. Intel is pursuing a hardware modification; Mozilla adopted the “noisy timer” approach; and others will patch the problem with “kernel page-table separation.” Although the latter does degrade performance, we believe the cost is justified given the severity of Meltdown and Spectre. The operating system patches that mitigate Meltdown and Spectre will fix Dr. K, too.

However, the ideal countermeasure is to integrate security goals deeply into chipset design rather than treating security as an afterthought. To do so, all software and hardware engineers must be well versed in computer security. This is a countermeasure that only university administrators and lawmakers can adopt, but students can demand: security education. I firmly believe that security education must be a core component of every computer science degree program. Georgia Tech is one of the few universities where it is.

Intel reportedly lost $11.3 billion in shareholder value after the announcement. Clearly, security mistakes carry a steep cost. Too many products are rushed to market without security as a consideration at the first stage of design. Too many hardware architects do not understand security. I believe that when educators and lawmakers make security education a priority, better products will result.


For more about this research: