Wednesday, November 22, 2023

How Not to Be an Accidental Mansplainer: Always Follow the 3 A's Approach to Advice | Chaos in the Cradle of A.I. | Opinion | Is Ireland Headed for a Merger? | The Invisible War in Ukraine Being Fought Over Radio Waves


Chaos in the Cradle of A.I. - The New Yorker

In the 1991 movie “Terminator 2: Judgment Day,” a sentient killer robot travels back in time to stop the rise of artificial intelligence. The robot locates the computer scientist whose work will lead to the creation of Skynet, a computer system that will destroy the world, and convinces him that A.I. development must be stopped immediately. Together, they travel to the headquarters of Cyberdyne Systems, the company behind Skynet, and blow it up. The A.I. research is destroyed, and the course of history is changed—at least, for the rest of the film. (There have been four further sequels.)

In the sci-fi world of “Terminator 2,” it’s crystal clear what it means for an A.I. to become “self-aware,” or to pose a danger to humanity; it’s equally obvious what might be done to stop it. But in real life, the thousands of scientists who have spent their lives working on A.I. disagree about whether today’s systems think, or could become capable of it; they’re uncertain about what sorts of regulations or scientific advances could let the technology flourish while also preventing it from becoming dangerous. Because some people in A.I. hold strong and unambiguous views about these subjects, it’s possible to get the impression that the A.I. community is divided cleanly into factions, with one worried about risk and the other eager to push forward. But most researchers are somewhere in the middle. They’re still mulling the scientific and philosophical complexities; they want to proceed cautiously, whatever that might mean.

OpenAI, the research organization behind ChatGPT, has long represented that middle-of-the-road position. It was founded in 2015, as a nonprofit, with big investments from Peter Thiel and Elon Musk, who were (and are) concerned about the risks A.I. poses. OpenAI’s goal, as stated in its charter, has been to develop so-called artificial general intelligence, or A.G.I., in a way that is “safe and beneficial” for humankind. Even as it tries to build “highly autonomous systems that outperform humans at most economically valuable work,” it plans to insure that A.I. will not “harm humanity or unduly concentrate power.” These two goals may very well be incompatible; building systems that can replace human workers has a natural tendency to concentrate power. Still, the organization has sought to honor its charter through a hybrid arrangement. In 2019, it divided itself into two units, one for-profit, one nonprofit, with the for-profit part overseen by the nonprofit part. At least in theory, the for-profit part of OpenAI would act like a startup, focussing on accelerating and commercializing the technology; the nonprofit part would act like a watchdog, preventing the creation of Skynet, while pursuing research that might answer important questions about A.I. safety. The profits and investment from commercialization would fund the nonprofit’s research.

Continued here


