
Weird AI News

  • Mark Dworkin
  • Sep 1, 2024
  • 3 min read

Updated: May 9




A former Google researcher and tech pioneer known as the “Godfather of AI” has issued a warning that the rise of AI will affect millions.

     

Professor Geoffrey Hinton left Google last year after admitting he regretted some of his work in the field of AI.

     

The former Google developer, who you would think knows a thing or two about AI, is worried that too many mundane jobs will disappear because AI will simply be able to do them.

     

“I am very worried about AI taking over lots of mundane jobs,” he told BBC’s Newsnight. “That should be a good thing. It’s going to lead to a big increase in productivity, which leads to a big increase in wealth, and if that wealth was equally distributed that would be great, but it’s not going to be,” Hinton went on. “I certainly believe in a universal basic income, but I don’t think that’s enough because people get their self-respect from the jobs they do. If you pay everyone a universal basic income, that solves the problem of them starving and not being able to pay their rent, but that doesn’t solve the self-respect problem.”

     

Hinton estimated that sometime in the next five to twenty years, we may well have to confront the problem of AI trying to take over. “That might lead to an extinction-level threat due to humans having created a form of intelligence that is just better than biological intelligence…That is very worrying for us.”


                *****************


Researchers at the University of Cambridge have issued an important warning over emerging AI tech that could allow people to “speak to the dead.” The tool could allow users to hold text and voice conversations with lost loved ones. In a recently released paper entitled “Digital Afterlife: Call for safeguards to prevent unwanted hauntings by AI chatbots of dead loved ones” researchers warn about the wider issues with talking to the dead.

     

“Some companies are already offering these services, providing an entirely new type of postmortem presence,” the paper claims.

     

Dr. Katarzyna Nowaczyk-Basinska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence, emphasized why tools like this can prove dangerous and advised caution within the industry.

     

“This area of AI is an ethical minefield. It’s important to prioritize the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services,” she stated. “At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

     

Co-author Dr. Tomasz Hollanek suggested it would be crucial to implement a system that lets individuals eventually cut ties with the digital persona, perhaps by holding a kind of funeral for it.

     

“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulation,” Dr. Hollanek said. “These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”


                ****************

     

A new study suggests there is cause for concern when it comes to AI due to how quickly it learns and applies that knowledge. Researchers of the study, which was published in the journal “Patterns,” indicated AI systems have already shown they are capable of deceiving humans.

“Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques of manipulation, sycophancy and cheating the safety test,” the study stated. “And their attempts are only getting better. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks such as losing control of AI systems.”

     

The researchers offered various solutions to safeguard the emerging technology.

     

“Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception. It is crucial to ensure that AI acts as a beneficial technology that augments rather than destabilizes human knowledge, discourse, and institutions.” 



St. Croix Times

LIFESTYLE MAGAZINE

MD Publications 

Publisher/Editor: M.A. Dworkin

Phone: 340-204-0237
Email: info@stcroixtimes.com

© 2024 St. Croix Times - All rights reserved
