
Banned 2.9 Million Accounts In January "To Combat Abuse On Our Platform": WhatsApp

"As captured in the latest monthly report, WhatsApp banned over 2.9 million accounts in January," the spokesperson said.


New Delhi: Social messaging platform WhatsApp on Wednesday said it banned over 2.9 million accounts in the country in January to "combat abuse." "WhatsApp is an industry leader in preventing abuse, among end-to-end encrypted messaging services," a spokesperson for the platform said. To keep "our users safe on our platform," WhatsApp has over the years consistently invested in artificial intelligence and other state-of-the-art technology, data scientists, and experts, the spokesperson said.

"In accordance with the IT Rules 2021, we have published our report for the month of January 2023. This user-safety report contains details of the user complaints received and the corresponding action taken by WhatsApp, as well as WhatsApp's own preventive actions to combat abuse on our platform. (Also Read: 'Baba Elon Musk': Bengaluru Men's Bizarre Puja Hailing Twitter Boss As God Goes Viral; Watch)

"As captured in the latest monthly report, WhatsApp banned over 2.9 million accounts in January," he added. From January 1 to January 31, a total of 1,461 reports were received and 195 cases were taken up for action.

There were 51 reports related to account support, 1,337 ban appeals, 45 other support requests, and 21 product support cases. In its statement, the company said that in addition to responding to and acting on user complaints received through its grievance channel, WhatsApp also deploys tools and resources to prevent harmful behaviour on the platform.

The company said, "We are particularly focused on prevention because we believe it is much better to stop harmful activity from happening in the first place than to detect it after harm has occurred."

"The abuse detection operates at three stages of an account's lifecycle: at registration, during messaging, and in response to negative feedback, which we receive in the form of user reports and blocks."

"A team of analysts augments these systems to evaluate edge cases and help improve our effectiveness over time," the company said, adding, "We have detailed our on-platform capabilities to identify and ban accounts in a white paper."