Last month, Chinese security researchers uncovered a vulnerability in an Android software library developed by the Chinese search giant Baidu, and as vulnerabilities go, this one’s a whopper. It allows an attacker to remotely wreak all sorts of havoc on someone’s phone, from sending fake SMS messages to downloading arbitrary files to installing other apps without the user’s authorization.
The widespread deployment of the vulnerable software library makes things even worse. The library, known as the Moplus SDK, is used by over 14,000 separate Android apps. By some estimates, as many as 100 million unique Android devices were vulnerable.
Google also shares responsibility for this vulnerability due to its all-or-nothing permissions regime:
Google is worried that giving users a choice about which apps can communicate over the Internet could put a dent in its lucrative advertising business. After all, a flashlight app without Internet access can’t display ads.
The problem is that security and privacy are two sides of the same coin. By refusing to give users a choice about whether or not apps have Internet access, Google is putting its users at risk and sending the message that it cares more about its bottom line than its users’ security.
Fortunately for Google, this is an easy fix—just include Internet access as one of the permissions apps have to request in the next version of Android. Otherwise, Moplus SDK won’t be the last major Android security catastrophe.
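For context, Android apps already declare network access in their manifests; the change described above would amount to surfacing that declaration as a permission users can individually deny, rather than one Android grants automatically at install time. The declaration itself is a single line:

```xml
<!-- AndroidManifest.xml: any app that talks to the network must declare this,
     but Android grants it automatically rather than letting users refuse it. -->
<uses-permission android:name="android.permission.INTERNET" />
```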
We rather doubt that the Monster of Mountain View is going to put users ahead of profits. That would jeopardize the golden gravy train of advertising, and Google certainly doesn’t want that.
A team of hackers recently discovered a man-in-the-middle vulnerability in a Samsung smart refrigerator that can be exploited to steal Gmail users’ login credentials, The Register reported this week.
Hackers from security company Pen Test Partners discovered the flaw while participating in an Internet of Things (IoT) hacking challenge at the Def Con security conference earlier this month. The smart refrigerator, Samsung model RF28HMELBSR, is designed to integrate the user’s Gmail Calendar with its display. Samsung implemented SSL to secure the Gmail integration, but the hackers found that the device does not validate SSL certificates, opening the opportunity for hackers to access the network and monitor activity for the user name and password used to link the refrigerator to Gmail.
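The fridge’s mistake is a classic one: speaking SSL but never checking who is on the other end. As a sketch (in Python, not the appliance’s actual firmware), the difference between a validating TLS client and one that behaves like the refrigerator’s is just a few lines of configuration:

```python
import ssl

# A properly configured TLS client: verifies the server's certificate chain
# and checks that the certificate matches the hostname being contacted.
secure_ctx = ssl.create_default_context()
print(secure_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(secure_ctx.check_hostname)                    # True

# A client configured the way the refrigerator behaves: any certificate is
# accepted, so a man-in-the-middle can present his own and read the
# "encrypted" traffic, credentials included.
insecure_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE
```

With the second configuration, the connection is still encrypted, but it may be encrypted to the attacker rather than to Google, which is exactly the hole Pen Test Partners found.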
This story neatly demonstrates the folly of adding Internet connectivity to refrigerators, washing machines, toasters, coffeemakers, and other home appliances. Not everything that draws electric current in a home needs to be able to browse the Web and talk to Google’s data centers. But companies like Samsung are so obsessed with catching the next trend in consumer electronics (the next trend being the so-called Internet of Things) that they are adding extra, unnecessary, gee-whiz features to the appliances they’re making.
Our advice: Steer clear of tricked-out, IoT-branded home appliances. You’ll save on energy costs, and you won’t end up with a home full of hackable devices in every room.
Vulnerabilities in the Google App Engine cloud platform make it possible for attackers to break out of a first-level security sandbox and execute malicious code in restricted areas of Google servers, a security researcher said Friday.
Adam Gowdiak, CEO of Poland-based Security Explorations, said there are seven separate vulnerabilities in the Google service, most of which he privately reported to Google three weeks ago. So far, he said, the flaws have gone unfixed, and he has yet to receive confirmation from Google officials. To exploit the flaws, attackers could use the freely available cloud platform to run a malicious Java application. That malicious Java app would then break out of the first sandboxing layer and execute code in the highly restricted native environment.
What’s interesting about this is that Google has previously disclosed flaws in Apple and Microsoft software before patches could be released, which made folks in Cupertino and Redmond very angry. Now Google’s getting a taste of its own medicine, except in this case, the researcher who delivered Google’s comeuppance waited a generous period of time without even getting so much as an acknowledgement from the Monster of Mountain View.
A new report from Avast supplies more evidence for the conclusion that Google Android is the least secure major mobile operating system there is.
A couple of days ago, a user posted a comment on our forum regarding apps harboring adware that can be found on Google Play. This didn’t seem like anything spectacular at the beginning, but once I took a closer look it turned out that this malware was a bit bigger than I initially thought. First of all, the apps are on Google Play, meaning that they have a huge target audience – in English-speaking and other language regions as well. Second, the apps were already downloaded by millions of users, and third, I was surprised that the adware led to some legitimate companies.
The Durak card game app was the most widespread of the malicious apps with 5 – 10 million installations according to Google Play.
When you install Durak, it seems to be a completely normal and well working gaming app. This was the same for the other apps, which included an IQ test and a history app. This impression remains until you reboot your device and wait for a couple of days. After a week, you might start to feel there is something wrong with your device. Some of the apps wait up to 30 days until they show their true colors. After 30 days, I guess not many people would know which app is causing abnormal behavior on their phone, right?
Each time you unlock your device an ad is presented to you, warning you about a problem, e.g. that your device is infected, out of date or full of porn. This, of course, is a complete lie. You are then asked to take action, however, if you approve you get re-directed to harmful threats on fake pages, like dubious app stores and apps that attempt to send premium SMS behind your back or to apps that simply collect too much of your data for comfort while offering you no additional value.
This is awful, but not surprising.
Considering how much control Google wields over its store and branded builds of the Android operating system, you might be tempted to think that apps would be well-screened and that this kind of problem wouldn’t exist. But it does, because Google is synonymous with shoddy security.
The NSA has long considered the Monster of Mountain View one-stop shopping, and it wasn’t until Edward Snowden leaked a mountain of data that Google decided to start encrypting the traffic that flowed between their servers. That’s progress, for sure, but it doesn’t change the fact that Google’s business model is itself built on user surveillance and data mining. Repeated reports like this show us that Android is not for anyone who cares about privacy or security.
Millions of Android users could be at risk as Google cuts back on security updates for older versions of its smartphone operating system.
The risk arises because Google has stopped producing security updates for parts of those older versions.
About 60% of all Android users, those on Android 4.3 or older, will be affected by the change.
The researchers who uncovered the policy change said it was “great news for criminals”.
How ironic: The company that made “don’t be evil” its motto is now increasingly a friend to evildoers as well as the National Security Agency.
Tod Beardsley and Joe Vennix from security firm Rapid7 and independent vulnerability finder Rafay Baloch contacted Google to let it know about the loophole. They expected to hear about the work Google was doing to patch the bug but instead were told that it was now only fixing bugs found in the two most recent versions of Android, known as KitKat (4.4) and Lollipop (5.0).
In a blogpost, Mr Beardsley said Google’s Android security team told him it would “welcome” a patch from the researchers if they produced one but would not be making one itself. It added that it would tell its Android partners about the bug even though no fix would be forthcoming.
Mr Beardsley said the response was so “bizarre” that he contacted Google for clarification and was told again that many components of Android in earlier versions of the OS would not be getting fixes.
Tod Beardsley is to be commended for exposing Google as an irresponsible software developer. It is truly appropriate that two of the news categories here on Leave Google Behind are Shoddy Security and Undependable Support. That’s exactly what you get when you buy a product running Google software, especially mass-produced Android smartphones. Google will gladly keep on tracking you even while they leave the holes in the operating system they made for your phone unpatched.
A word to the wise: Leave Google Behind. Stay far, far, far away from Android. Get a phone running BlackBerry, Windows Phone, or Firefox OS instead.
[I]n some cases GCHQ and the NSA appear to have taken a more aggressive and controversial route—on at least one occasion bypassing the need to approach Google directly by performing a man-in-the-middle attack to impersonate Google security certificates. One document published by Fantastico, apparently taken from an NSA presentation that also contains some GCHQ slides, describes “how the attack was done” to apparently snoop on SSL traffic. The document illustrates with a diagram how one of the agencies appears to have hacked into a target’s Internet router and covertly redirected targeted Google traffic using a fake security certificate so it could intercept the information in unencrypted format.
Documents from GCHQ’s “network exploitation” unit show that it operates a program called “FLYING PIG” that was started up in response to an increasing use of SSL encryption by email providers like Yahoo, Google, and Hotmail. The FLYING PIG system appears to allow it to identify information related to use of the anonymity browser Tor (it has the option to query “Tor events”) and also allows spies to collect information about specific SSL encryption certificates.
GCHQ, for those who don’t know, is the British equivalent of the NSA.
So much for Google’s security measures. Forced SSL may deter petty man-in-the-middle attacks from amateur hackers, but it doesn’t shield anyone from the likes of the NSA.
This isn’t to say that SSL is useless and shouldn’t be used. HTTPS is better than HTTP. But if Google were serious about security and protecting its users, it would make Gmail like Hushmail, offering the ability to encrypt entire accounts and individual messages. There are enough Gmail users that offering encryption by default would have an immediate and huge effect on email security.
But, of course, if Google were to offer such encryption, it would no longer be able to read its users’ emails and place targeted ads within Gmail. Messages would be scrambled and unreadable by Google’s algorithms. So Google is never going to do what Hushmail does. It would interfere with their ability to offer “free” Gmail.
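The point generalizes: with end-to-end encryption, the provider stores only ciphertext, and its ad-matching algorithms have nothing to read. Real systems use schemes like OpenPGP; the principle can be sketched with a toy one-time pad (an illustration only, not a production cipher):

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a freshly generated one-time pad held only by the user."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same pad recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"meet me at noon")
# Without the key, the stored ciphertext is indistinguishable from random
# bytes: there is nothing for a content-scanning algorithm to target.
assert otp_decrypt(key, ct) == b"meet me at noon"
```

If only the user holds the key, the provider can deliver and store the message but cannot mine it, which is precisely why an advertising-funded service has no incentive to build this.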
Google has been caught hosting more than a dozen malicious titles in its official Android app market. Some had been downloaded tens of thousands of times and turn smartphones into zombies that await commands from their attacker overlords, security researchers said.
A stash of 17 malicious apps remained freely available in the Google Play store, according to a blog post published Thursday by researchers from antivirus provider Trend Micro. Six of those titles contained a highly stealthy code dubbed Plankton, which causes Android-based phones to connect to command and control servers and wait for commands. At least 10 Plankton-based apps found last year in the Android market collected users’ browsing history, bookmarks, and device information and sent them to servers under the control of the attackers.
Isn’t one of the major justifications for walled garden-style app stores like “Google Play” that they protect users? That they prevent people from downloading malicious software? (Yes, those were rhetorical questions.)
There’s no question app stores have been successful in allowing companies like Apple and Google to wield a huge degree of control over the user experience on their mobile platforms. But while that control may be good for the corporate bottom line (it keeps people locked in), it’s bad for user freedom, privacy, and security, as this report makes clear.
First it was the Sesame Street YouTube account. Now it’s Microsoft’s:
Microsoft’s official YouTube channel appears to have been taken over by someone not affiliated with the company, who has removed all of the videos and posted solicitations for sponsorships, apparently anticipating an influx of traffic as the news spreads.
I subscribe to the company’s YouTube updates and received notification this morning of two new uploads by the company, both of them rudimentary videos apparently soliciting advertisements for the channel. Since then a third video has been uploaded, along the same lines.
A YouTube account is, of course, a Google account – meaning that someone who signs up for YouTube isn’t just creating an identity on YouTube; they’re creating an identity that can be used with other Google offerings, such as Gmail, Blogger, Picasa, or Google Maps. The list goes on… and on… and on…
Who knows what else the hackers compromised that was in that Google account?
This is why it’s a bad idea to do business the Google way. Google’s account security sucks, as demonstrated by these recent break-ins.
When you trust one company to handle your emails, chat, videos, documents, pictures, credit card data, and other sensitive information, you’re asking for trouble. A lot of trouble.
The City of Los Angeles made a big mistake when it decided to do business with the Monster of Mountain View. Now the city is trying to get a partial refund from Google because some of its departments refuse to use Google’s insecure Apps offering:
Two years after the City of Los Angeles approved a $7.25 million deal to move its e-mail and productivity infrastructure to Google Apps, the migration has still not been completed because the Los Angeles Police Department and other agencies are unsatisfied with Google’s security related to the handling of criminal history data.
Los Angeles officials originally expected to roll Google Apps out to its 30,000 users by June 2010, in partnership with systems integration contractor CSC. But that number has been reduced to about 17,000 employees, largely because of security objections raised by the LAPD and other safety-related departments. Advocacy group Consumer Watchdog opposed the deal, and this week released a letter LA officials sent to CSC in August, which states “The City is in receipt of your letter dated May 13, 2011, wherein CSC indicates that it is unable to meet the security requirements of the City and the Los Angeles Police Department (LAPD) for all data and information, pursuant to U.S. DOJ Criminal Justice Information Systems (CJIS) policy requirements.”
Google has a poor reputation when it comes to privacy and security. That’s because Google’s business model is built on collecting as much user data as possible and monetizing it. Google’s response to security problems has been to collect even more personal information; these days, Gmail users are asked to associate their mobile phone numbers with their Google accounts, all in the name of improved security.
Of course, if a person who uses only GoogleTech loses their Android phone, their email, contacts, web history, and so much more could all be compromised simultaneously. That’s the danger of trusting one company with your data.
Google may see its Chrome operating system as more secure than traditional alternatives, but one security researcher believes the cloud-based OS is vulnerable, according to a Reuters story published yesterday.
WhiteHat Security researcher Matt Johansen said he found a flaw in a Chrome OS application that he was able to exploit to gain control of a Google e-mail account. Though Google fixed the flaw after it was reported, Johansen claims to have discovered other applications with the same flaw, Reuters said.
In citing the security holes in Chrome OS, Johansen specifically pointed to the ability of hackers who can steal data as it moves between the cloud and the Chrome OS browser instead of hacking directly into a user’s PC.
“I can get at your online banking or your Facebook profile or your e-mail as it is being loaded in the browser,” he told Reuters. “If I can exploit some kind of Web application to access that data, then I couldn’t care less what is on the hard drive.”
Google’s “Chromebooks” are basically nothing more than glorified terminals providing access to Google’s opaque datacenters, with sod-all security. People concerned about their privacy and security would do well to stay far, far away from Google’s offerings.
Google, of course, reacted very defensively when asked for comment about this. They’d like people to believe their products are secure. But reality has proved otherwise. Security seems to be an afterthought as far as Google is concerned. That’s because Google’s business isn’t security, it’s data-mining.