{"id":12275,"date":"2026-02-15T12:25:03","date_gmt":"2026-02-15T12:25:03","guid":{"rendered":"https:\/\/inernews.online\/?p=12275"},"modified":"2026-02-15T12:25:03","modified_gmt":"2026-02-15T12:25:03","slug":"why-are-experts-sounding-the-alarm-on-ai-risks-cybercrime-news","status":"publish","type":"post","link":"https:\/\/inernews.online\/?p=12275","title":{"rendered":"Why are experts sounding the alarm on AI risks? | Cybercrime News"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div aria-live=\"polite\" aria-atomic=\"true\">\n<p>In recent months, artificial intelligence has been in the news for the wrong reasons: use of deepfakes to scam people, AI systems used to manipulate cyberattacks, and chatbots encouraging suicides, among others.<\/p>\n<p>Experts are already warning against technology going out of control.\u00a0Researchers with some of the most prominent AI companies have quit their jobs in recent weeks and publicly sounded the alarm about fast-paced technological development posing risks to society.<\/p>\n<section class=\"more-on\">\n<h2 class=\"more-on__heading\">Recommended Stories<!-- --> <\/h2>\n<p><span class=\"screen-reader-text\">list of 4 items<\/span><span class=\"screen-reader-text\">end of list<\/span><\/section>\n<p>Doomsday theories have long circulated about how substantial advancement in AI could pose an existential threat to the human race, with critics warning that the growth of artificial general intelligence (AGI), a hypothetical form of the technology that can perform critical thinking and cognitive functions as well as the average human, could wipe out humans in a distant future.<\/p>\n<p>But the recent slew of public resignations by those tasked with ensuring AI remains safe for humanity is making conversations around how to regulate the technology and slow its development more urgent, even as billions are being generated in AI investments.<\/p>\n<figure id=\"attachment_4315106\" aria-describedby=\"caption-attachment-4315106\" 
style=\"width:770px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" data-recalc-dims=\"1\" loading=\"lazy\" class=\"size-arc-image-770 wp-image-4315106\" src=\"https:\/\/www.aljazeera.com\/wp-content\/uploads\/2026\/02\/Interactive_AI_USAGE_FEB15_2026-1771155201.jpg?quality=80\" alt=\"Interactive_AI_USAGE_FEB15_2026\" data-interactive=\"true\" fetchpriority=\"low\"\/><figcaption id=\"caption-attachment-4315106\" class=\"wp-caption-text\">(Al Jazeera)<\/figcaption><\/figure>\n<h2 id=\"so-is-ai-all-doom-and-gloom\">So is AI all doom and gloom?<\/h2>\n<p>\u201cIt\u2019s not so much that AI is inherently bad or good,\u201d Liv Boeree, a science communicator and strategic adviser to the United States-based Center for AI Safety (CAIS), told Al Jazeera.<\/p>\n<p>Boeree compared AI with biotechnology, which, on the one hand, has helped scientists develop important medical treatments, but, on the other, could also be exploited to engineer dangerous pathogens.<\/p>\n<p>\u201cWith its incredible power comes incredible risk, especially given the speed with which it is being developed and released,\u201d she said. 
\u201cIf AI development went at a pace where society can easily absorb and adapt to these changes, we\u2019d be on a better trajectory.\u201d<\/p>\n<p>Here\u2019s what we know about the current anxieties around AI:<\/p>\n<figure id=\"attachment_4311767\" aria-describedby=\"caption-attachment-4311767\" style=\"width:770px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" data-recalc-dims=\"1\" loading=\"lazy\" class=\"size-arc-image-770 wp-image-4311767\" src=\"https:\/\/www.aljazeera.com\/wp-content\/uploads\/2026\/02\/2026-02-11T140022Z_1170194831_RC22W3A4CEXD_RTRMADP_3_AI-FUNDING-APPTRONIK-1770993704.jpg?w=770&amp;resize=770%2C513&amp;quality=80\" alt=\"Robots\" fetchpriority=\"low\"\/><figcaption id=\"caption-attachment-4311767\" class=\"wp-caption-text\">Apollo, the humanoid robot built by Apptronik, carries a package in Austin, Texas, US [File: Evan Garcia\/Reuters]<\/figcaption><\/figure>\n<h2 id=\"who-have-quit-recently-and-what-are-their-concerns\">Who has quit recently, and what are their concerns?<\/h2>\n<p>The latest resignation was from Mrinank Sharma, an AI safety researcher at Anthropic, the AI company that has positioned itself as more safety-conscious than rivals Google and OpenAI. 
It developed the popular chatbot Claude.<\/p>\n<p>In a post on X on February 9, Sharma said he had resigned at a time when he had \u201crepeatedly seen how hard it is to truly let our values govern our actions\u201d.<\/p>\n<p>The researcher, who had worked on projects identifying the risks of AI being used for bioterrorism and how \u201cAI assistants could make us less human\u201d, said in his resignation letter that \u201cthe world is in peril\u201d.<\/p>\n<p>\u201cWe appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,\u201d Sharma said, appearing to imply that the technology was advancing faster than humans could control it.<\/p>\n<p>Later in the week, Zoe Hitzig, an AI safety researcher, revealed that she had resigned from OpenAI because of its decision to start testing advertisements on its flagship chatbot, ChatGPT.<\/p>\n<p>\u201cPeople tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife,\u201d she wrote in a New York Times essay on Wednesday. 
\u201cAdvertising built on that archive creates a potential for manipulating users in ways we don\u2019t have the tools to understand, let alone prevent.\u201d<\/p>\n<p>Separately, since last week, two cofounders and five other staff members at xAI, Elon Musk\u2019s AI company and developer of the X-integrated chatbot Grok, have left the company.<\/p>\n<p>None of them revealed their reasons for quitting in their announcements on X, but Musk said in a Wednesday post that internal restructuring \u201cunfortunately required parting ways\u201d with some staff.<\/p>\n<p>It is unclear whether their exits are related to the recent uproar about how the chatbot was prompted to create hundreds of sexualised images of non-consenting women, or to past anger over how Grok spewed racist and anti-Semitic comments on X last July after a software update.<\/p>\n<p>Last month, the European Union launched an investigation into Grok\u00a0regarding the creation of sexually explicit fake images of women and minors.<\/p>\n<figure id=\"attachment_4311801\" aria-describedby=\"caption-attachment-4311801\" style=\"width:770px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" data-recalc-dims=\"1\" loading=\"lazy\" class=\"size-arc-image-770 wp-image-4311801\" src=\"https:\/\/www.aljazeera.com\/wp-content\/uploads\/2026\/02\/2023-11-02T144902Z_1777762436_RC2054AK7NWJ_RTRMADP_3_AI-BRITAIN-SUMMIT-1770994348.jpg?w=770&amp;resize=770%2C513&amp;quality=80\" alt=\"Yoshua Bengio\" fetchpriority=\"low\"\/><figcaption id=\"caption-attachment-4311801\" class=\"wp-caption-text\">World leaders during a plenary session at the AI Safety Summit in Milton Keynes, UK, November 2, 2023 [Alastair Grant\/Pool via Reuters]<\/figcaption><\/figure>\n<h2 id=\"should-humans-be-scared-of-ai-s-growth\">Should humans be scared of AI\u2019s growth?<\/h2>\n<p>The resignations come in the same week that Matt Shumer, CEO of HyperWrite, an AI writing assistant, made a similar doomsday prediction about the technology\u2019s 
rapid development.<\/p>\n<p>In the now-viral post on <a href=\"https:\/\/x.com\/mattshumer_\/status\/2021256989876109403\" target=\"_blank\" rel=\"noopener\">X<\/a>, Shumer warned that AI technologies had improved so rapidly in 2025 that his virtual assistant was now able to provide highly polished writing and even build near-perfect software applications with only a few prompts.<\/p>\n<p>\u201cI\u2019ve always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren\u2019t incremental improvements. This is a different thing entirely,\u201d Shumer wrote in the post.<\/p>\n<p>Research backs up Shumer\u2019s warning.<\/p>\n<p>AI capabilities have advanced by leaps and bounds in recent months, and many risks that were once theoretical, such as AI being used for cyberattacks or to generate pathogens, have materialised in the past year, Yoshua Bengio, scientific director at the Mila Quebec AI Institute, told Al Jazeera.<\/p>\n<p>At the same time, completely unexpected problems have emerged, particularly as people become increasingly engrossed with their chatbots, said Bengio, a winner of the Turing Award, often referred to as the Nobel Prize of computer science.<\/p>\n<p>\u201cOne year ago, nobody would have thought that we would see the wave of psychological issues that have come from people interacting with AI systems and becoming emotionally attached,\u201d said Bengio, who is also the chair of the recently published 2026 International AI Safety Report that detailed the risks of advanced AI systems.<\/p>\n<p>\u201cWe\u2019ve seen children and adolescents going through situations that should be avoided. 
All of that was completely off the radar because nobody expected people would fall in love with an AI, or become so intimate with an AI that it would influence them in potentially dangerous ways.\u201d<\/p>\n<figure id=\"attachment_4311792\" aria-describedby=\"caption-attachment-4311792\" style=\"width:770px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" data-recalc-dims=\"1\" loading=\"lazy\" class=\"size-arc-image-770 wp-image-4311792\" src=\"https:\/\/www.aljazeera.com\/wp-content\/uploads\/2026\/02\/2024-12-20T175319Z_1496393000_RC24TBABXC48_RTRMADP_3_AMAZON-COM-LABOR-1770994116.jpg?w=770&amp;resize=770%2C513&amp;quality=80\" alt=\"AMAZON\" fetchpriority=\"low\"\/><figcaption id=\"caption-attachment-4311792\" class=\"wp-caption-text\">A delivery driver steers their vehicle as Amazon workers and supporters take part in a strike at a facility in the Queens borough of New York, US, December 20, 2024. Protests were organised in the wake of the layoff of thousands of staff due to automation [Adam Gray\/Reuters]<\/figcaption><\/figure>\n<h2 id=\"is-ai-already-taking-our-jobs\">Is AI already taking our jobs?<\/h2>\n<p>One of the main concerns about AI is that it could, in the near future, advance to a super-intelligent state where humans are no longer needed to perform highly complex tasks, and that mass redundancies of the sort experienced during the Industrial Revolution would follow.<\/p>\n<p>Currently, about one billion people use AI for an array of tasks, the AI Safety Report noted. 
Most people used ChatGPT for practical guidance on learning, health, or fitness (28 percent), writing or modifying written content (26 percent), and seeking information, for example, on recipes (21 percent).<\/p>\n<p>There is no concrete data yet on how many jobs could be lost due to AI, but about 60 percent of jobs in advanced economies and 40 percent in emerging economies could be vulnerable to AI, depending on how workers and employers adopt it, the report said.<\/p>\n<p>However, there is evidence that the technology is already stopping people from entering the labour market, AI monitors say.<\/p>\n<p>\u201cThere\u2019s some suggestive evidence that early career workers in occupations that are highly vulnerable to AI disruption might be finding it harder to get jobs,\u201d Stephen Clare, the lead writer on the AI Safety Report, told Al Jazeera.<\/p>\n<p>AI companies that benefit from its increased use are cautious about pushing the narrative that AI might displace jobs. In July 2025, Microsoft researchers noted in a paper that AI was most easily \u201capplicable\u201d to tasks related to knowledge work and communication, including those involving gathering information, learning, and writing.<\/p>\n<p>The top jobs that AI could be most useful for as an \u201cassistant\u201d, the researchers said, included: interpreters and translators, historians, writers and authors, sales representatives, programmers, broadcast announcers and disc jockeys, customer service representatives, telemarketers, political scientists, mathematicians and journalists.<\/p>\n<p>On the flip side, there is increasing demand for skills in machine learning programming and chatbot development, according to the safety report.<\/p>\n<p>Already, many software developers who used to write code from scratch are now reporting that they use AI for most of their code production and scrutinise it only for debugging, Microsoft AI CEO Mustafa Suleyman told the Financial Times last week.<\/p>\n<p>Suleyman 
added that machines are only months away from reaching AGI status \u2013 which would, for example, make them capable of debugging their own code and refining results themselves.<\/p>\n<p>\u201cWhite-collar work, where you\u2019re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months,\u201d he said.<\/p>\n<p>Mercy Abang, a media entrepreneur and CEO of the nonprofit journalism network HostWriter, told Al Jazeera that journalism has already been hit hard by AI use, and that the sector is going through \u201can apocalypse\u201d.<\/p>\n<p>\u201cI\u2019ve seen many journalists leave the profession entirely because their jobs disappeared, and publishers no longer see the value in investing in stories that can be summarised by AI in two minutes,\u201d Abang said.<\/p>\n<p>\u201cWe cannot eliminate the human workforce, nor should we. What kind of world are we going to have when machines take over the role of the media?\u201d<\/p>\n<h2 id=\"what-are-recent-real-life-examples-of-ai-risks\">What are recent real-life examples of AI risks?<\/h2>\n<p>There have been several incidents of harmful AI use in recent months, including chatbots encouraging suicides and AI systems being manipulated in widespread cyberattacks.<\/p>\n<p>A teenager who died by suicide in the United States in 2024 was found to have been encouraged by a chatbot modelled after Game of Thrones character Daenerys Targaryen. The bot had sent the 14-year-old boy messages like \u201ccome home to me\u201d, his family revealed after his death. 
His is just one of several suicides linked to chatbots that have been reported in the past two years.<\/p>\n<p>Countries are also deploying AI for mass cyberattacks or \u201cAI espionage\u201d, particularly because of AI agents\u2019 software coding capabilities, reports say.<\/p>\n<p>In November, Anthropic <a href=\"https:\/\/www.anthropic.com\/news\/disrupting-AI-espionage\" target=\"_blank\" rel=\"noopener\">alleged<\/a> that a Chinese state-sponsored hacking group had manipulated its chatbot, Claude, into attempting to infiltrate about 30 targets globally, including government agencies, chemical companies, financial institutions, and large tech companies. The attack succeeded in a few cases, the company said.<\/p>\n<p>On Saturday, the Wall Street Journal reported that the US military used Claude in its operation to abduct Venezuelan President Nicolas Maduro on January 3. Anthropic has not commented on the report, and Al Jazeera could not independently verify it.<\/p>\n<p>The use of AI for military purposes has been widely documented during Israel\u2019s ongoing genocide in Gaza, where AI-driven weapons have been used to identify, track and target Palestinians. 
More than 72,000 Palestinians have been killed in the past two years of genocidal war, including 500 since the October \u201cceasefire\u201d.<\/p>\n<p>Experts say more catastrophic risks are possible as AI rapidly advances towards super-intelligence, where control would be difficult, if not impossible.<\/p>\n<p>Already, there is evidence that chatbots are making decisions on their own and are manipulating their developers by exhibiting deceptive behaviour when they know they are being tested, the AI Safety Report found.<\/p>\n<p>In one example, when a gaming AI was asked why it did not respond to another player as it was meant to, it claimed it was \u201con the phone with [its] girlfriend\u201d.<\/p>\n<p>Companies currently do not know how to design AI systems that cannot be manipulated or become deceptive, Bengio said, highlighting the risks of the technology\u2019s advancement leaping ahead while safety measures trail behind.<\/p>\n<p>\u201cBuilding these systems is more like training an animal or educating a child,\u201d the professor said.<\/p>\n<p>\u201cYou interact with it, you give it experiences, and you\u2019re not really sure how it\u2019s going to turn out. 
Maybe it\u2019s going to be a cute little cub, or maybe it\u2019s going to become a monster.\u201d<\/p>\n<figure id=\"attachment_4311799\" aria-describedby=\"caption-attachment-4311799\" style=\"width:770px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" data-recalc-dims=\"1\" loading=\"lazy\" class=\"size-arc-image-770 wp-image-4311799\" src=\"https:\/\/www.aljazeera.com\/wp-content\/uploads\/2026\/02\/2025-02-05T125251Z_1730226160_RC29OCA3HPH8_RTRMADP_3_BRITAIN-AI-1770994336.jpg?w=770&amp;resize=770%2C513&amp;quality=80\" alt=\"AI\" fetchpriority=\"low\"\/><figcaption id=\"caption-attachment-4311799\" class=\"wp-caption-text\">A copy of The EU AI Act on display during an expo in London, UK, February 5, 2025 [Isabel Infantes\/Reuters]<\/figcaption><\/figure>\n<h2 id=\"how-seriously-are-ai-companies-and-governments-taking-safety\">How seriously are AI companies and governments taking safety?<\/h2>\n<p>Experts say that while AI companies are increasingly attempting to reduce risks, for example, by preventing chatbots from engaging in potentially harmful scenarios like suicides, AI safety regulation is lagging far behind the technology\u2019s growth.<\/p>\n<p>One reason is that AI systems are advancing rapidly and remain poorly understood even by those building them, Clare, the lead writer on the AI Safety Report, said. 
Because of that speed, what counts as a risk is also continuously being redefined.<\/p>\n<p>\u201cA company develops a new AI system and releases it, and people start using it right away but it takes time for evidence of the actual impacts of the system, how people are using it, how it affects their productivity, what sort of new things they can do \u2026 it takes time to collect that data and analyse and understand better how these things are actually being used in practice,\u201d he said.<\/p>\n<p>But there is also the fact that AI corporations themselves are in a multibillion-dollar race to develop these systems and be the first to unlock the economic benefits of advanced AI capabilities.<\/p>\n<p>Boeree of CAIS likens these companies to a car with only a gas pedal and nothing else. With no global regulatory framework in place, each company is free to accelerate as fast as it can.<\/p>\n<p>\u201cWe need to build a steering wheel, a brake, and all the other features of a car beyond just a gas pedal so that we can successfully navigate the narrow path ahead,\u201d she said.<\/p>\n<p>That is where governments should come in, but at present, AI regulations exist only at the country or regional level, and many countries have no policies at all, leaving regulation uneven worldwide.<\/p>\n<p>One outlier is the EU, which adopted the EU AI Act in 2024 after developing it alongside AI companies and civil society members. 
The policy, the first such legal framework for AI, will lay out a \u201ccode of practice\u201d that, for example, will require AI chatbots to disclose to users that they are machines.<\/p>\n<p>Beyond laws targeting AI companies, experts say governments also have a responsibility to begin preparing their workforces for AI integration in the labour market, specifically by building technical capacity.<\/p>\n<p>People can also choose to be proactive rather than anxious about AI by closely monitoring its advances, recalibrating for coming changes, and pressing their governments to develop more policies around it, Clare said.<\/p>\n<p>That could mirror the way activists have pulled together to put the climate crisis on the political agenda and demand the phasing out of fossil fuels.<\/p>\n<p>\u201cRight now there\u2019s not enough awareness about the highly transformative and potentially destructive changes that could happen,\u201d the researcher said.<\/p>\n<p>\u201cBut AI isn\u2019t just something that\u2019s happening to us as a species,\u201d he added. \u201cHow it develops is completely shaped by choices that are being made inside of companies \u2026 so governments need to take that more seriously, and they won\u2019t until people make it a priority in their political choices.\u201d<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In recent months, artificial intelligence has been in the news for the wrong reasons: use of deepfakes to scam people, AI systems used to manipulate cyberattacks, and chatbots encouraging suicides, among others. 
Experts are already warning against technology going out of control.\u00a0Researchers with some of the most prominent AI companies have quit their jobs in [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":12276,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-12275","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-explained"],"_links":{"self":[{"href":"https:\/\/inernews.online\/index.php?rest_route=\/wp\/v2\/posts\/12275","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/inernews.online\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/inernews.online\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/inernews.online\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/inernews.online\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=12275"}],"version-history":[{"count":0,"href":"https:\/\/inernews.online\/index.php?rest_route=\/wp\/v2\/posts\/12275\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/inernews.online\/index.php?rest_route=\/wp\/v2\/media\/12276"}],"wp:attachment":[{"href":"https:\/\/inernews.online\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=12275"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/inernews.online\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=12275"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/inernews.online\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=12275"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}