
Thursday 3 October 2019

Alexa Got Hacked | SQL Injection by Voice


Based on your previous coverage of similar topics, we thought you might be interested in this recently discovered voice-activated device hack, and a new technique we call Voice-Command SQL Injection.



Here are the highlights:

  1. A hacker uses a voice-command SQL injection technique to extract unauthorized data from the application database, including the admin account.
  2. This is done on an Alexa device but could be performed on any voice-activated digital assistant.
  3. This means that users of virtual assistant skills/applications could be exposed to new attacks.
  4. The outcome: the tried-and-true SQL injection attack is now voice-enabled.


Write-up:

Picture this: your bank account has been hacked. That alone is nothing new, but what if it was hacked using nothing more than a voice and Alexa? Pretty scary, right?
We all know that not all web applications are created equal: each has a different level of security in place to protect its data and to control access. Unfortunately, the average user has very little insight into how secure any given application is, whether it is a financial, retail, utility, or fitness application.

Luckily, there are regulatory requirements surrounding application security in certain industries, like financial services (FINRA), healthcare (HIPAA), and retail (PCI-DSS). But what about industries not covered by regulatory compliance? And even within those that are, how have security protocols evolved to protect applications and skills on new channels like Alexa, Google Assistant, Cortana, and Siri?
In fact, it is now easier than ever for hackers to break into a variety of applications using nothing but their voice. Leveraging voice-command SQL injection, a hacker can speak simple commands that, once run through voice-to-text translation, grant access to the application and expose sensitive account information.

To illustrate the vulnerability and raise awareness, Protego's Head of Security and ethical hacker, Tal Melamed, demonstrates how a simple SQL injection can be executed through a verbal command to gain unauthorized access to sensitive account data. The demo shows how, in this instance, Alexa's translation of spoken words and numbers can be used to exploit an unprotected application or skill.
Tal shows how easy it is to gain unauthorized access to unsecured applications through Alexa, simply by speaking account numbers and text. Since Tal is an ethical hacker, he uses an application and SQL database he built himself, but in reality it could be any application that accepts an account number or text as a unique identifier.
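
Protego hasn't published the demo's source, but the vulnerable pattern it relies on is easy to picture. Below is a minimal sketch, assuming a Python skill backend, an accounts table, and a handler that concatenates the transcribed slot value straight into the query; all of the names here are illustrative assumptions, not Tal's actual code.

  # Hypothetical vulnerable skill backend. A sketch, not Protego's demo code.
  import sqlite3

  def handle_get_balance(spoken_account_id: str) -> str:
      # In a real skill, this string is the intent-slot value, i.e. whatever
      # Alexa's speech-to-text produced ("one two three four" becomes "1234").
      conn = sqlite3.connect("accounts.db")  # assumed database file
      try:
          # Vulnerable: the transcribed value is concatenated into the statement.
          query = "SELECT name, balance FROM accounts WHERE account_id = " + spoken_account_id
          row = conn.execute(query).fetchone()
      finally:
          conn.close()

      if row is None:
          return "Sorry, I could not find that account."
      return f"{row[0]}, your balance is {row[1]:.2f} dollars."

Nothing in this pattern is specific to voice; the weakness lives entirely in how the application builds its query.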



The demo highlights the following:

  1. Tal will try to access an admin account that he is not authorized to access, based on name identification and account ID.
  2. Alexa will at first deny his request.
  3. Tal will then bypass this denial by speaking a random account number followed by syntax that triggers the SQL injection.
  4. When asked for an account ID, he will simply say a random number and add "or true," which matches every row in the database (see the sketch after this list).
  5. Alexa will then read back the balance of the unauthorized admin account.
  6. It is important to note that this demo does not highlight a vulnerability in Alexa itself, but rather in the applications that use Alexa.
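
To make those steps concrete, here is a minimal sketch of the same flow against a throwaway in-memory SQLite table. The account numbers, names, and balances are invented; the point is only how "or true" changes the query.

  # Minimal sketch of the demo flow (invented data, not Tal's database).
  # The "spoken" strings stand in for Alexa's speech-to-text output.
  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE accounts (account_id INTEGER, name TEXT, balance REAL)")
  conn.execute("INSERT INTO accounts VALUES (1, 'admin', 987654.21)")
  conn.execute("INSERT INTO accounts VALUES (42, 'tal', 120.50)")

  def get_balance(spoken_account_id):
      # Vulnerable: the transcribed value is concatenated into the query.
      query = "SELECT name, balance FROM accounts WHERE account_id = " + spoken_account_id
      return conn.execute(query).fetchall()

  # Step 2: a guessed account number matches nothing, so the request is denied.
  print(get_balance("1234"))          # []

  # Steps 3-5: the same number spoken with "or true" appended matches every
  # row, including the admin account ("or 1=1" works the same way).
  print(get_balance("1234 or true"))  # [('admin', 987654.21), ('tal', 120.5)]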


In fact, any application susceptible to an attack via text input (think a hacker and a keyboard) would be susceptible to a voice-based attack like the one depicted here. All that is needed are these three conditions:

  1. the virtual assistant function/skill uses SQL as its database
  2. the function/skill is vulnerable to SQL injection
  3. one of the (vulnerable) SQL queries includes an integer value as part of the query (illustrated in the sketch below)
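
The third condition matters because voice input makes it hard to inject punctuation: when the vulnerable query compares against an unquoted integer, a spoken payload like "one two three four or true" drops straight into the SQL, whereas a quoted string column would first require sneaking a quote character past the speech-to-text layer. Roughly (again a sketch, not code from the demo):

  spoken = "1234 or true"  # what speech-to-text might hand the skill

  # Integer comparison: no quotes around the value, so the payload is live SQL.
  print("SELECT balance FROM accounts WHERE account_id = " + spoken)
  # SELECT balance FROM accounts WHERE account_id = 1234 or true

  # String comparison: the payload lands inside the quotes and stays inert
  # unless the attacker can also inject a closing quote.
  print("SELECT balance FROM accounts WHERE name = '" + spoken + "'")
  # SELECT balance FROM accounts WHERE name = '1234 or true'
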
Of course, there are other similar attacks not based on SQL (e.g. command injection) that might also be relevant, as these rapidly deployed skills/applications don't necessarily lend themselves to putting a WAF in front of them.



In environments where there is no true perimeter around the applications themselves, security teams need to work with developers to build security into the code itself. It is the only way to ensure applications are protected on all fronts against malicious attacks, including those that arrive through virtual assistants.

If additional application security measures were in place, whether the application is hosted on serverless or other cloud infrastructure, Alexa wouldn't be able to reach any secure data, even when an SQL injection like this is attempted.
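
In code terms, those measures are nothing exotic: validate that the transcribed slot really is a number, bind it as a query parameter instead of concatenating it, and check that the caller is authorized for that account. A hardened version of the earlier sketch might look like this (again, the names and table structure are assumptions):

  # Hardened lookup: validate, parameterize, authorize (illustrative sketch).
  import sqlite3

  def get_balance_safe(conn, spoken_account_id, authenticated_account_id):
      # Reject anything that is not a plain number; "1234 or true" fails here.
      if not spoken_account_id.isdigit():
          return None
      account_id = int(spoken_account_id)

      # Only let callers query the account they are authenticated for.
      if account_id != authenticated_account_id:
          return None

      # Bound parameter: the value can never change the shape of the query.
      row = conn.execute(
          "SELECT balance FROM accounts WHERE account_id = ?", (account_id,)
      ).fetchone()
      return row[0] if row else None

With the value treated as data rather than spliced into the statement, "or true" is just a string that fails validation, whether it was typed or spoken.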

It is a new game of cat and mouse with hackers: organizations are developing hundreds of skills every day, and attackers have now found a new way to try to access these applications. Now we have to make sure we are prepared.

Next Post: Alexa, The spy.

