The age of the car app is upon us. You can use the manufacturer’s app for your shiny new BMW, for example, to find your car in crowded parking lots, light the lights, honk the horn, and defrost it when temperatures are subzero, all remotely. That’s the good news. The bad news is that apps mean security holes that can be exploited by bad actors, and they won’t just be flashing your headlights or sounding your car horn.
Many cities are already embarking on creative uses of systems, data, and applications, engaging with vendors, operations departments, and mobile device users to make life easier, respond faster, and cut expenses. However, security remains a critical issue. Among other things, smart cities multiply the opportunities for data theft and system sabotage by bad actors lying in wait for a chance to strike.
Gordon Gekko, lead baddie in the movies “Wall Street” and “Wall Street: Money Never Sleeps”, would surely feel in tune with today’s cybercriminals. Virtual villainy doesn’t sleep either. Whether it’s a zero-day threat or some other app or software vulnerability, there’s always a hacker out there somewhere who’s ready to exploit a security defect, day or night.
Nobody (except a bad actor) wants their mobile app to wreak havoc on unsuspecting users, or expose them to glaring security flaws. By looking inside the app to analyze the software, you can see how it behaves, but what about the domains and networks it links to, or apps that share some of the same code? How safe are they?
Don’t get us wrong – manual testing of software is still an important and valuable part of quality assurance, leveraging human intuition and inventiveness for proper coverage. But there’s a penalty to pay as well. Manual testing takes time and effort. When good automation can handle a large part of what would otherwise have been manual testing, in minutes instead of days, and at a fraction of the cost, insisting on manual testing alone could be a significant error.
Many apps ask for information and permissions at the time of installation to track user activity, and users often consent to these requests. However, when the user deletes such an app, the tracking should stop as well. Instead, using a technique known as fingerprinting, app creators can illicitly track mobile devices even after the app has been deleted, and re-identify the same device if the app is later reinstalled.
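To see why fingerprinting survives uninstallation, consider a minimal sketch (the attribute names and values below are hypothetical, chosen only for illustration): because the identifier is derived from stable device characteristics rather than from anything the app stores locally, deleting the app changes nothing, and a fresh install can recompute the exact same fingerprint.

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Combine stable device attributes into one identifier.

    Nothing here is stored on the device by the app itself, so the
    fingerprint can be recomputed after an uninstall/reinstall cycle,
    silently re-linking the device to its previous tracking profile.
    """
    # Sort keys so the same attributes always hash the same way.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attribute values; a real fingerprinter would read these
# from platform APIs (device model, OS build, screen size, locale, ...).
attrs = {
    "model": "Pixel 7",
    "os_build": "TQ3A.230901.001",
    "screen": "1080x2400",
    "locale": "en_US",
    "timezone": "America/New_York",
}

before_uninstall = device_fingerprint(attrs)
after_reinstall = device_fingerprint(attrs)   # same device, fresh install
assert before_uninstall == after_reinstall    # the device is re-identified
```

The key design point is that the "identifier" is never written to app storage at all, which is exactly what puts it beyond the reach of the user's delete button.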
If someone were to tell you that you have a mobile application on your phone that allows you to record audio, would you be concerned? It may alarm you, but that in and of itself is not enough to say the application is insecure. There are too many variables: What is the function of the application? Is the user aware of when they are being recorded? Is there a way to retrieve the audio remotely?
Coming to a smartphone near you today, augmented reality apps herald a new world of opportunities and dangers. While virtual reality (VR) and its separate virtual worlds have been in the public eye for a while, augmented reality (AR) with its subtle blending of both virtual and real contexts is a relative newcomer. However, now that Facebook is using its new camera tools to launch its own AR platform, overlaying graphics on the real world via your mobile screen, AR is likely to gain significantly in popularity.
There are two things wrong with the preconception that cyber criminals rely on tacky, free mobile apps to get victims to leak their financial details, so that the criminals can then empty their bank accounts. First, mobile data leaks also happen via many popular, well-regarded app brands. Second, financial data is only one part of the treasure trove for cyber criminals, who may find even richer gains by using additional, non-financial, personal data.