You Judge
You may find New Scientist's special issue The AI Revolution (subscription required) helpful as "you judge"...
"...First, these systems are not yet reliable. Nor are they conscious. They’re not deciding to do anything that might be harmful to us. The real potential for harm is in humans using them, and therefore we do need to regulate them.
Second, just because we don’t understand precisely how they work yet doesn’t mean they’re magic. It’s just that they’re very complex. We will be able to understand them. We just need to do the science, and to do the science we need these systems not to be entirely in the hands of for-profit corporations...
Third, ..."
--Melanie Mitchell, New Scientist, 22 April 2023, p. 49--
RES:
The Right Way to Regulate AI by Alondra Nelson (download pdf)
Web:
I wonder whether aging, senior "deep neural networks" will one day conduct a reflective retirement review of their life's work...probably not without a memory.
PopSci, ‘Godfather of AI’ quits Google to talk openly about the dangers of the rapidly emerging tech
Other AI-Related Discussions and Information:
arXiv, KAN: Kolmogorov-Arnold Networks (PDF, 12.2 MB); a brief illustrative sketch of the KAN idea follows this list
SF Press Club, AI + Journalism (Vimeo) [Moderator: Rachel Metz]
Current Biology, Primer: Neural network models and deep learning, April 2019 (the article page links to a downloadable PDF [406 KB], which includes a further-reading section)
"...'The problem is that the task of prediction is not equivalent to the task of understanding,' said Allyson Ettinger, a computational linguist at the University of Chicago..."
Nature, Synthetic Data in Machine Learning for Medicine and Healthcare; NAM, Toward a Code of Conduct for Artificial Intelligence Used in Health, Medical Care, and Health Research
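For readers curious what a Kolmogorov-Arnold Network actually computes, here is a minimal sketch of the core idea: instead of fixed activations on nodes, each edge carries its own learnable univariate function, and each output node simply sums the edge functions applied to its inputs. The arXiv paper parameterizes those edge functions with B-splines; this sketch substitutes a simple polynomial basis purely to stay self-contained, and the class/variable names are this example's own choices, not the paper's API.

```python
# Minimal, illustrative KAN-layer sketch (polynomial edge functions instead of
# the paper's B-splines). Forward pass only; no training loop.
import numpy as np

class KANLayer:
    def __init__(self, in_dim, out_dim, degree=3, seed=None):
        rng = np.random.default_rng(seed)
        # coeffs[i, j, k] scales x_i**k on the edge from input i to output j,
        # so each edge (i, j) has its own learnable univariate function.
        self.coeffs = rng.normal(scale=0.1, size=(in_dim, out_dim, degree + 1))

    def forward(self, x):
        # x: (batch, in_dim). Evaluate phi_ij(x_i) = sum_k coeffs[i, j, k] * x_i**k,
        # then sum the edge values over the inputs i for each output j.
        powers = np.stack([x**k for k in range(self.coeffs.shape[-1])], axis=-1)  # (batch, in, deg+1)
        edge_values = np.einsum('bik,ijk->bij', powers, self.coeffs)              # (batch, in, out)
        return edge_values.sum(axis=1)                                            # (batch, out)

# Stacking layers composes sums of univariate functions, in the spirit of the
# Kolmogorov-Arnold representation theorem the paper builds on.
layer1, layer2 = KANLayer(4, 8, seed=0), KANLayer(8, 1, seed=1)
x = np.random.default_rng(2).normal(size=(5, 4))
print(layer2.forward(layer1.forward(x)).shape)  # (5, 1)
```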
AI-Related Applications:
Last updated December 16, 2024; added links to Tristan Harris's short (20-minute) talk at the AI for Good Global Summit 2024 and to the ITU, AI for Good, Deloitte Impact Report