You take the role of Estelle, a young woman venturing out from her secluded mountain village for the very first time. The season is coming to an end, and while there's no clear explanation of how long a season lasts, you learn that their end is punctuated by the demise of previous regimes, the rise of illness or a destructive war. Whatever the cause, this current era is slowly giving way to what comes next, but in a plaintive, melancholy fashion.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter said.

The Future of Life Institute, the thinktank that coordinated the effort, cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. But four experts cited in the letter have expressed concern that their research was used to make such claims.

When initially launched, the letter lacked verification protocols for signing and racked up signatures from people who did not actually sign it, including Xi Jinping and Meta's chief AI scientist, Yann LeCun, who clarified on Twitter that he did not support it.

Critics have accused the Future of Life Institute (FLI), which has received funding from the Musk foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.

Among the research cited was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as "more powerful than GPT4".

"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI," she said. "Ignoring active harms right now is a privilege that some of us don't have."

Her co-authors Timnit Gebru and Emily M Bender criticised the letter on Twitter, with the latter branding some of its claims as "unhinged".

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing that the widespread use of AI already posed serious risks. Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war and other existential threats.

She told Reuters: "AI does not need to reach human-level intelligence to exacerbate those risks."

"There are non-existential risks that are really, really important, but don't receive the same kind of Hollywood-level attention."

Asked to comment on the criticism, FLI's president, Max Tegmark, said both short-term and long-term risks of AI should be taken seriously. "If we cite someone, it just means we claim they're endorsing that sentence. It doesn't mean they're endorsing the letter, or we endorse everything they think," he told Reuters.

The original version of this story stated that the Future of Life Institute (FLI) was primarily funded by Elon Musk. It has been updated to reflect that while the group has received funds from Musk, he is not its largest donor.