Google has unveiled its Safety Charter in India, expanding its efforts to detect and fight fraud across the country, the company’s largest market outside the United States.
Digital fraud in India is growing. Fraud tied to UPI, the Indian government-backed instant payments system, rose 85% year-over-year to nearly 11 billion Indian rupees (about $127 million) last year, according to government data. India has also seen numerous cases of digital arrest scams, in which fraudsters pose as officials to extort money over video calls, as well as predatory loan apps.
With its Safety Charter, Google aims to tackle some of these areas. The company has also opened the Google Safety Engineering Centre (GSEC) in India, its fourth such center after Dublin, Munich, and Malaga.
Announced last year at the Google for India summit, GSEC India will let Google work with the local community, including the government, academics, students, and small and medium-sized enterprises, to build solutions for cybersecurity and privacy challenges, Heather Adkins, Google’s vice president of security engineering, said in an interview with TechCrunch.
Google has worked with the Home Ministry’s Indian Cyber Crime Coordination Centre (I4C) to raise awareness of cybercrime, the company said in a blog post. This builds on the company’s existing work, including the launch of its online fraud detection program, DigiKavach, which debuted in 2023 to limit the harmful effects of malicious financial apps and predatory loan apps.
With GSEC India, Google will focus on three main areas, Adkins told TechCrunch: online scams and fraud, and how to keep people safe on the internet; cybersecurity for businesses, the government, and critical infrastructure; and building responsible AI.
“These three areas will become part of our safety charter for India and in the coming years … We want to use the fact that we have a technological capability here to solve what is happening in India, close to where users are,” Adkins said.
Globally, Google uses AI to combat online fraud, removing millions of ads and advertiser accounts. The company aims to deploy AI more extensively in India to fight digital fraud.
Google Messages, which ships on many Android devices, uses AI-powered scam detection that has helped protect users from more than 500 million suspicious messages a month. Similarly, Google piloted Play Protect in India last year, which it says has blocked nearly 60 million high-risk installation attempts, spanning more than 220,000 unique apps on over 13 million devices. Google Pay, one of the UPI-based payment apps in the country, has also shown 41 million warnings for transactions suspected of being scams.
Adkins, a founding member of the Google security team who has been with the company for more than 23 years, discussed several other topics during the interview with TechCrunch:
Adkins said one thing on her mind is the use and abuse of AI by malicious actors.
“Obviously, we are watching AI very carefully, and so far we have seen large language models such as Gemini used as productivity improvements,” she said.
Adkins said Google is conducting extensive tests of AI models to ensure that they understand what they should not do.
“This is important for the generated content that can be harmful and the actions it can take,” Adkins said.
Google is working on frameworks, including its Secure AI Framework (SAIF), to limit the abuse of its Gemini models. However, to protect generative AI from misuse by hackers in the future, the company sees the need for a framework that builds security into how multiple agents communicate.
“Industry is moving very, very quickly [by] putting protocols out. It is almost like the early days of the internet, where everyone releases code in real time and we think about security after the fact,” Adkins said.
Google is not simply pushing its own frameworks to limit the scope for generative AI to be abused by hackers. Instead, Adkins said, the company is working with the research community and developers.
“One of the things you don’t want to do is limit yourself too much in the early research days,” Adkins said.
On surveillance vendors
Along with generative AI’s potential for abuse by hackers, Adkins sees commercial surveillance vendors as a major threat. These include spyware makers such as NSO Group, known for its Pegasus spyware, as well as smaller businesses selling surveillance tools.
“These are the companies that operate all over the world, and they grow and sell a platform for hacking,” Adkins said. “You may pay $20. You may pay $200,000, just depending on the sophistication of the platform. And it allows you to scale attacks on people without any expertise of your own.”
Some of these vendors also sell their tools for spying on people in markets including India. Beyond being a target for surveillance tools, though, the country has its own unique challenges, in part because of its size. India sees not only AI-driven scams and voice clones but also cases of digital arrests, which Adkins characterized as simply regular frauds adapted to the digital world.
“You can see how fast the threat actors themselves move … I like to study cybercrime in this region because of it. It is often a hint of what we will see around the world at some point,” Adkins said.
On multi-factor authentication
Google has long encouraged its users to adopt authentication methods safer than passwords to protect their presence on the internet. The company previously rolled out passkeys to all user accounts and has also promoted hardware-based security keys, which Adkins mentioned, noting that its employees use them with their laptops. Passwordless has also become a popular tech term, with various implementations.
However, expecting people to abandon passwords in a market like India is tough, given its vast demographics and different economic landscape.
“We knew for a long time that passwords were not safe. This concept of multiple factors was a step forward,” Adkins said, adding that Indian users may favor SMS-based authentication over other MFA options.
