
Posts from the ‘Shoddy Security’ Category

8 Oct

Google concealed a “software glitch” in Google+ that exposed data of half a million people

Irresponsibility is their policy:

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal.

As part of its response to the incident, the Alphabet Inc. unit plans to announce a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+, the people said. The move effectively puts the final nail in the coffin of a product that was launched in 2011 to challenge Facebook Inc. and is widely seen as one of Google’s biggest failures.

A software glitch in the social site gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal investigators discovered and fixed the issue, according to the documents and people briefed on the incident. A memo reviewed by the Journal prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

This revelation raises the question: what other dirty laundry is the Monster of Mountain View hiding?

Google executives have clearly relished watching Facebook take incoming fire in the press on a near constant basis this year. It’s no wonder they didn’t want to come clean about their own failings. But if they truly lived by their internal motto of “don’t be evil”, then they would have disclosed this glitch in the interest of transparency. How they expected to keep it a secret indefinitely is anyone’s guess.

It’s good that Google+ is shutting down. But the company must not be allowed to wash its hands of this incident and walk away. There should be consequences.

The European Union and the United States government should launch immediate investigations into this matter and find out what other secrets Google may be keeping from its users and stockholders.

 

23 Mar

Crooks infiltrate Google Play with malware in QR reading utilities

Google fails again… surprise, surprise:

SophosLabs just alerted us to a malware family that had infiltrated Google Play by presenting itself as a bunch of handy utilities.

Sophos detects this malware as Andr/HiddnAd-AJ, and the name gives you an inkling of what the rogue apps do: blast you with ads, but only after lying low for a while to lull you into a false sense of security.

We reported the offending apps to Google, and they’ve now been pulled from the Play Store, but not before some of them attracted more than 500,000 downloads.

The subterfuge used by the developers to keep Google’s “Play Protect” app-vetting process sweet seems surprisingly simple.

Prefer Android to iOS? Use F-Droid to get apps, NOT Google Play. F-Droid builds its catalog of free-software apps from source and flags known antifeatures, which makes it a far less hospitable place for malware.

22 Nov

Google admits tracking users’ location even when location services are disabled

Big Brother is watching you. Even if you’ve told Big Brother, er, Google, that you don’t want to be tracked.

Many people realize that smartphones track their locations. But what if you actively turn off location services, haven’t used any apps, and haven’t even inserted a carrier SIM card?

Even if you take all of those precautions, phones running Android software gather data about your location and send it back to Google when they’re connected to the internet, a Quartz investigation has revealed.

Since the beginning of 2017, Android phones have been collecting the addresses of nearby cellular towers—even when location services are disabled—and sending that data back to Google. The result is that Google, the unit of Alphabet behind Android, has access to data about individuals’ locations and their movements that go far beyond a reasonable consumer expectation of privacy.

Quartz observed the data collection occur and contacted Google, which confirmed the practice.

When confronted, Google claimed that the tracking was happening in part to improve message delivery, which Quartz rightly deemed to be a completely bogus explanation.

It is not clear how cell-tower addresses, transmitted as a data string that identifies a specific cell tower, could have been used to improve message delivery. But the privacy implications of the covert location-sharing practice are plain. While information about a single cell tower can only offer an approximation of where a mobile device actually is, multiple towers can be used to triangulate its location to within about a quarter-mile radius, or to a more exact pinpoint in urban areas, where cell towers are closer together.

The practice is troubling for people who’d prefer they weren’t tracked, especially for those such as law-enforcement officials or victims of domestic abuse who turn off location services thinking they’re fully concealing their whereabouts. Although the data sent to Google is encrypted, it could potentially be sent to a third party if the phone had been compromised with spyware or other methods of hacking. Each phone has a unique ID number, with which the location data can be associated.

Read the whole thing.
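The quarter-mile figure in the excerpt is just geometry. Below is a rough sketch of the idea: given a few towers whose positions are known (carriers, and Google itself, keep such databases) and an estimated range to each, even a simple weighted centroid narrows a handset down to a neighborhood. The coordinates and ranges here are invented for illustration; real localization uses signal timing and proper least-squares solvers.

```typescript
// Rough sketch: approximate a handset's position from nearby cell towers.
// Tower coordinates and ranges are invented; this is only to show how little
// data is needed to place a phone within a few blocks.

interface Tower {
  lat: number;     // tower latitude in degrees
  lon: number;     // tower longitude in degrees
  rangeKm: number; // estimated distance from handset to this tower
}

// Weighted centroid: towers the phone appears closer to pull the estimate harder.
function estimatePosition(towers: Tower[]): { lat: number; lon: number } {
  let latSum = 0, lonSum = 0, weightSum = 0;
  for (const t of towers) {
    const w = 1 / Math.max(t.rangeKm, 0.01); // closer tower => larger weight
    latSum += t.lat * w;
    lonSum += t.lon * w;
    weightSum += w;
  }
  return { lat: latSum / weightSum, lon: lonSum / weightSum };
}

// Three towers in a dense urban grid (hypothetical coordinates).
const estimate = estimatePosition([
  { lat: 47.6101, lon: -122.3421, rangeKm: 0.4 },
  { lat: 47.6158, lon: -122.3353, rangeKm: 0.7 },
  { lat: 47.6069, lon: -122.3320, rangeKm: 0.5 },
]);
console.log(estimate); // lands within a few hundred meters of the handset
```

With towers packed as densely as they are in cities, a handful of tower identifiers per report is plenty to reconstruct someone’s movements over time.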

1 Nov

Google’s reCaptcha defeated again

NakedSecurity reports:

Researchers have created an automated system to solve Google’s reCAPTCHA auditory challenges.

Again.

Poor, poor prove-you’re-a-human reCAPTCHA tests – also known as Completely Automated Public Turing tests to tell Computers and Humans Apart – they get no respect!

The point of reCAPTCHA challenges is to act as a gate that lets humans through but stops or slows down bots (software robots), so a bot that can solve a CAPTCHA automatically defeats the whole object of reCAPTCHA. And yet, that’s precisely what keeps happening. There are three kinds, and they’ve all been automatically kicked over by researchers.

reCAPTCHA tests aren’t much of a hurdle for sophisticated spammers, but they definitely inconvenience and annoy users. Yet they remain in widespread use. Time to get rid of them and replace them with something better.

18 Sep

Malware still lurking in the Google Play mobile app store

Embarrassing:

It seems almost too ironic that the Google Play Store has been secretly invaded by even more malware after Google has promoted its Google Play Protect security platform for Android. Boasting of technologies like machine learning and artificial intelligence, Play Protect promises to protect Android users more thoroughly without having to increase manpower. Alas, it seems that another piece of malware, named ExpensiveWall, has gotten past the Play Store’s security, and this lapse is costing users not just peace of mind but actual money as well.

Check Point, the cybersecurity firm that reported this latest news, says that ExpensiveWall, named after one of its carriers, “Lovely Wallpaper,” is actually a new variant of another malware discovered earlier this year. Both types of malware are costing users money by silently signing them up for premium subscriptions or sending premium SMS. Both strains have also made it past Google’s security checks and have been downloaded thousands of times by users.

SlashGear, which posted the report excerpted above, says Google needs to step up its security game. Duh. Supposedly, that’s what they were doing when they launched “Play Protect”. But obviously, they failed.

Anyone who wants a secure mobile platform should invest in a BlackBerry device — and preferably one that runs the secure BlackBerry 10 operating system — to keep their data and networks secure.

11 Sep

Google releases new version of Chrome that incorporates a technology called “WebUSB”

USB, or Universal Serial Bus, is already a technology that has a lot of security problems. Now Google is rushing to put into its increasingly dominant web browser (Chrome) a technology that allows websites to interface with USB devices via JavaScript, which has to be one of the worst ideas they’ve ever come up with:

Google has wrapped up coding the desktop version of Chrome 61, and will be rolling it out for Windows, Mac and Linux “over the coming days/weeks”.

Chrome 61 extends the visibility of USB-connected devices to Web apps. First proposed last year, WebUSB was pitched as an easier way to set up USB devices, since (for example) a vendor’s site could use the API to push a config to a newly-connected gadget.

The feature’s focus, Google says, is on specialist devices that don’t have a standard way to advertise their capabilities. Keyboards or mice are easy, but as is explained in the specification, USB-connected educational devices (say, microscopes) or 3D printers aren’t conveniently accessible.

There’s also the vexed question of USB device updates: the Chrome devs explain WebUSB could let manufacturers update a device by getting users to visit the page and give permission to the update [What could possibly go wrong? – Reg].

What could possibly go wrong, indeed! That wasn’t just the reaction of the folks at The Register; it was also the reaction of a commenter at Phoronix, who wisely said No thanks, Google.

We’ve learned over the past few years that everything connected to the internet tends to be less secure. Therefore, it follows that a device can be made more secure if it’s not connected to the internet. Perhaps we should strive to minimize how many devices can be connected directly to the internet by emphasizing localized control and asking ourselves, “Do we really need internet-controlled light-bulbs?”

This may not be to Google’s advantage, as it won’t be able to obtain as much data from non-internet-connected devices, but it may be to the benefit of the internet at large. Some devices may actually work better and be more useful when connected to the internet, but the majority of the “Internet of Things” probably doesn’t actually need an internet connection, especially if those devices can be controlled locally, either through a physical push of a button or through local networks such as Bluetooth, NFC, Thread, or other P2P mesh networking technologies. The latter could bring much of the same convenience of controlling a smart device from an app, without the downside of allowing someone from the other side of the world to connect to it as well.

Well said. Putting WebUSB in Chrome was a mistake. Then again, using Chrome is a mistake. LGB recommends Firefox instead, or one of its derivatives, like Waterfox or Pale Moon.
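For a concrete sense of what the Register excerpt above is describing, here is a minimal sketch of the WebUSB flow a page can run once a user clicks through a single permission prompt. The vendor ID and payload are placeholders rather than a real device, and the cast is only because the USB types aren’t in the default DOM typings.

```typescript
// Minimal WebUSB sketch: what a web page can do after one permission prompt.
// Vendor ID and payload below are placeholders, not a real device.

async function pushConfigToDevice(): Promise<void> {
  // Must be triggered by a user gesture (e.g. a button click); the browser
  // then shows a chooser listing USB devices that match the filter.
  const device = await (navigator as any).usb.requestDevice({
    filters: [{ vendorId: 0x1234 }], // hypothetical vendor
  });

  await device.open();                 // take control of the device
  await device.selectConfiguration(1); // pick a USB configuration
  await device.claimInterface(0);      // claim an interface for exclusive use

  // Send arbitrary bytes straight to the device over a bulk OUT endpoint.
  const payload = new TextEncoder().encode("firmware-or-config-blob");
  await device.transferOut(1, payload);

  await device.close();
}
```

Once granted, that permission generally persists for the site and device, which is exactly what makes “update your gadget by visiting our page” both convenient and worrying.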

10 Nov

Millions upon millions of Android devices are vulnerable to remote hijacking

Oops:

Last month, Chinese security researchers uncovered a security vulnerability in an Android software library developed by the Chinese search giant Baidu, and when it comes to security vulnerabilities, this one’s a whopper. It allows an attacker to remotely wreak all sorts of havoc on someone’s phone, from sending fake SMS messages to downloading arbitrary files to installing other apps without the user’s authorization.

The widespread deployment of the vulnerable software library makes things even worse. The library, known as the Moplus SDK, is used by over 14,000 separate Android apps. By some estimates, as many as 100 million unique Android devices were vulnerable.

Google also shares responsibility for this vulnerability due to its all-or-nothing permissions regime:

Google is worried that giving users a choice about which apps are communicating about them could put a dent in their lucrative advertising business. After all, a flashlight app without Internet access can’t display ads.

The problem is that security and privacy are two sides of the same coin. By refusing to give users a choice about whether or not apps have Internet access, Google is putting its users at risk and sending the message that it cares more about its bottom line than its users’ security.

Fortunately for Google, this is an easy fix—just include Internet access as one of the permissions apps have to request in the next version of Android. Otherwise, Moplus SDK won’t be the last major Android security catastrophe.

We rather doubt that the Monster of Mountain View is going to put users ahead of profits. That would jeopardize the golden gravy train of advertising, and Google certainly doesn’t want that.

 

29 Aug

Google partner Samsung’s “smart” refrigerator turns out to be a hackable refrigerator, too

Whoops:

A team of hackers recently discovered a man-in-the-middle vulnerability in a Samsung smart refrigerator that can be exploited to steal Gmail users’ login credentials, The Register reported this week.

Hackers from security company Pen Test Partners discovered the flaw while participating in an Internet of Things (IoT) hacking challenge at the Def Con security conference earlier this month. The smart refrigerator, Samsung model RF28HMELBSR, is designed to integrate the user’s Gmail Calendar with its display. Samsung implemented SSL to secure the Gmail integration, but the hackers found that the device does not validate SSL certificates, opening the opportunity for hackers to access the network and monitor activity for the user name and password used to link the refrigerator to Gmail.
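Skipping certificate validation is the kind of mistake that is easy to make in ordinary client code, too. Here is a minimal Node.js-flavored sketch, with a placeholder host name, of the single setting that separates real TLS from what the fridge was doing: with validation off, anyone on the network path can answer with their own certificate and read whatever credentials the client sends.

```typescript
// Sketch of the fridge's mistake, in Node.js terms. With certificate
// validation disabled, a man-in-the-middle presenting any certificate is
// accepted, and credentials sent by the client can be captured in transit.
import * as https from "https";

function fetchCalendar(validateCerts: boolean): void {
  const req = https.request(
    {
      host: "calendar.example.com", // placeholder host
      path: "/feed",
      method: "GET",
      // The whole point of TLS is lost when this is false.
      rejectUnauthorized: validateCerts,
    },
    (res) => console.log("status:", res.statusCode)
  );
  req.on("error", (err) => console.error("TLS error:", err.message));
  req.end();
}

fetchCalendar(true);  // correct: the handshake fails if the certificate is bogus
fetchCalendar(false); // the fridge's behavior: accepts anyone claiming to be the server
```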

This story neatly demonstrates the folly of adding Internet connectivity to refrigerators, washing machines, toasters, coffeemakers, and other home appliances. Not everything that draws electric current in a home needs to be able to browse the Web and talk to Google’s data centers. But companies like Samsung are so obsessed with catching the next trend in consumer electronics (the next trend being the so-called Internet of Things) that they are adding extra, unnecessary, gee-whiz features to the appliances they’re making.

Our advice: Steer clear of tricked-out, IoT-branded home appliances. You’ll save on energy costs, and you won’t end up with a home full of hackable devices in every room.

15 May

Security researcher exposes Google’s double standard on responsible disclosure of exploits

Via Ars Technica:

Vulnerabilities in the Google App Engine cloud platform make it possible for attackers to break out of a first-level security sandbox and execute malicious code in restricted areas of Google servers, a security researcher said Friday.

Adam Gowdiak, CEO of Poland-based Security Explorations, said there are seven separate vulnerabilities in the Google service, most of which he privately reported to Google three weeks ago. So far, he said, the flaws have gone unfixed, and he has yet to receive confirmation from Google officials. To exploit the flaws, attackers could use the freely available cloud platform to run a malicious Java application. That malicious Java app would then break out of the first sandboxing layer and execute code in the highly restricted native environment.

What’s interesting about this is that Google has previously disclosed flaws in Apple and Microsoft software before patches could be released, which made folks in Cupertino and Redmond very angry. Now Google’s getting a taste of its own medicine, except in this case, the researcher who delivered Google’s comeuppance waited a generous period of time without getting so much as an acknowledgement from the Monster of Mountain View.

3 Feb

Avast sounds the alarm about apps on Google Play that pose as games and infect users with adware

A new report from Avast supplies more evidence for the conclusion that Google Android is the least secure major mobile operating system there is.

A couple of days ago, a user posted a comment on our forum regarding apps harboring adware that can be found on Google Play. This didn’t seem like anything spectacular at first, but once I took a closer look it turned out that this malware was a bit bigger than I initially thought. First of all, the apps are on Google Play, meaning that they have a huge target audience – in English-speaking regions as well as other language regions. Second, the apps were already downloaded by millions of users, and third, I was surprised that the adware led to some legitimate companies.

The Durak card game app was the most widespread of the malicious apps with 5 – 10 million installations according to Google Play.

When you install Durak, it seems to be a completely normal and well working gaming app. This was the same for the other apps, which included an IQ test and a history app. This impression remains until you reboot your device and wait for a couple of days. After a week, you might start to feel there is something wrong with your device. Some of the apps wait up to 30 days until they show their true colors. After 30 days, I guess not many people would know which app is causing abnormal behavior on their phone, right? :)

Each time you unlock your device an ad is presented to you, warning you about a problem, e.g. that your device is infected, out of date or full of porn. This, of course, is a complete lie. You are then asked to take action, however, if you approve you get re-directed to harmful threats on fake pages, like dubious app stores and apps that attempt to send premium SMS behind your back or to apps that simply collect too much of your data for comfort while offering you no additional value.

This is awful, but not surprising.

Considering how much control Google wields over its store and branded builds of the Android operating system, you might be tempted to think that apps would be well-screened and that this kind of problem wouldn’t exist. But it does, because Google is synonymous with shoddy security.

The NSA has long considered the Monster of Mountain View one-stop shopping, and it wasn’t until Edward Snowden leaked a mountain of data that Google decided to start encrypting the traffic that flowed between their servers. That’s progress, for sure, but it doesn’t change the fact that Google’s business model is itself built on user surveillance and data mining. Repeated reports like this show us that Android is not for anyone who cares about privacy or security.