question: Christopher Nolan and what Oppenheimer taught him about AI threats - The Washington Post

Tech Policy: Director Christopher Nolan reckons with AI's 'Oppenheimer moment'. The writer-director says when he set out to make 'Oppenheimer' he didn't expect to get caught up in the debate over artificial intelligence. By Cat Zakrzewski, December 30, 2023 at 7:00 a.m. EST.

[Photo: Christopher Nolan poses for a portrait in December 2020 in Burbank, Calif. (Emily Berl for The Washington Post)]

2023 has been widely described as the AI industry's "Oppenheimer" moment. J. Robert Oppenheimer, known as the "father of the atomic bomb" for his leading role in the Manhattan Project, struggled with the deadly consequences of his invention.

Director Christopher Nolan reawakened public interest in the scientist's life this year with the release of a blockbuster epic detailing his tortured life. Many saw parallels between Oppenheimer's attempts to warn policymakers about nuclear proliferation and modern alarm over the possible consequences of artificial intelligence, with some top technologists saying AI poses a "risk of extinction" on par with nuclear weapons.

As Nolan's film took over theaters this summer, the debate over how to develop AI safely and responsibly was reaching a peak in Washington. As President Biden was convening top CEOs for discussions about AI at the White House, tech executives and senators saw an opportunity to use Oppenheimer's struggles to illustrate the morally complex stakes of the debate over the emerging technology.

But Silicon Valley's fascination with Oppenheimer has left Nolan with "conflicted" feelings. "It's a wonderful thing that scientists and technologists of all stripes are looking to history and looking at that moment and worrying about unintended consequences," Nolan said in a recent interview at the Hay-Adams hotel in Washington. "But I also think it's important to bear in mind that the nuclear threat is a singular threat to humanity."

Nolan says the atomic bomb was a "force of destruction," and that policymakers need to address it differently than a tool such as artificial intelligence. He warned against viewing AI as a special case and cautioned against ascribing "godlike" attributes to the technology in ways that could allow companies and governments to deflect responsibility. "We need to view it as a tool, and we need accountability for the people who wield the tool and the ways they wield the tool," he said.

Some technologists are warning of "doomsday"-style scenarios in which AI develops the ability to think on its own and attempts to destroy humanity. Their warnings have resonated on the global stage, and they were a key focus of an international gathering of leaders to discuss AI safety at Bletchley Park, the historic site in Britain where Allied code-breakers deciphered secret German messages during World War II.

But Nolan warns that focusing on those potential outcomes distracts from solving problems companies and policymakers could address now. "It lets everybody off the hook if we're looking at the most extreme scenarios," he said.

Already, AI systems are ingesting his work and other Hollywood movies to generate photos and videos, he said. Nolan says policymakers need to address the ways that AI systems are taking people's work now. "When we look to the far reaches of where this technology might be applied or where it goes, I think it distracts from things that need to be addressed right now, like copyright law," he said. "They're not as exciting and interesting to talk about … but there's an immediate impact on employment and compensation that needs to be dealt with."

Oppenheimer's story also signals how difficult the path ahead will be to regulate artificial intelligence, according to Nolan. ChatGPT accelerated a race within top companies to develop and deploy AI systems, and policymakers around the world are in the early stages of catching up. In the U.S. Congress, lawmakers have launched a group to develop bipartisan legislation to address the technology, amid extensive lobbying from the tech industry.

Oppenheimer largely failed in his efforts to address the risks of his invention. He was "crushed" in his efforts to prevent the development of the hydrogen bomb, Nolan said. The scientist's efforts to work within the political system to create change largely failed, especially after his security clearance was revoked over allegations that he had ties to communism. "I sympathize with people on the cutting edge of AI who will look at Oppenheimer's story and see it as a cautionary tale, partly because I don't think it offers many answers," he said.

In the postwar years, atomic researchers were elevated in pop culture and reached a level of fame scientists had never before seen, Nolan said. But ultimately, they found themselves excised from the political system. "When politicians need the inventors, they have a voice, and when they no longer need them, they have less of a voice," Nolan said. "Oppenheimer's story points to a lot of the difficulties, pitfalls around these kinds of issues."

If inventors can't ultimately decide how their technology is used, it bodes poorly for the host of tech executives, researchers and technologists who have invested significant time in educating Washington policymakers about artificial intelligence this year. OpenAI CEO Sam Altman, Tesla CEO Elon Musk and top AI researchers from schools such as the Massachusetts Institute of Technology have spent hours testifying in hearings and speaking with lawmakers in closed-door meetings amid the new AI debate.

The modern political environment presents new challenges, especially as the companies developing AI systems amass greater political influence in Washington. "I'm worried that our leaders in Washington have not yet managed to break free from the manipulations of the tech industry that consistently tell them that they don't understand enough to regulate," Nolan said. "We have to get past that mode immediately."

When Nolan began working on the movie about the 20th-century scientist, he says he had no idea it would be so relevant to this year's tech debate. He frequently discussed AI during his "Oppenheimer" media blitz, and in November he was awarded the Federation of American Scientists' Public Service Award alongside policymakers working on artificial intelligence, including Sen. Charles E. Schumer (D-N.Y.), Sen. Todd C. Young (R-Ind.) and Alondra Nelson, the former acting director of the White House Office of Science and Technology Policy. "Making a film about Oppenheimer, I never thought I would spend so much time talking about artificial intelligence," Nolan said.

washingtonpost.com © 1996-2023 The Washington Post

answer: { "command": { "name": "make_post", "args": { "prompt": "A dramatic image capturing the essence of innovation and its unintended consequences, reflecting a pivotal 'Oppenheimer moment' for AI as discussed by Christopher Nolan. The visual should parallel the duality of technological advancement with potential peril, akin to the themes of the 'Oppenheimer' movie.", "text": "Christopher Nolan explores the dual nature of technology with AI's 'Oppenheimer moment', urging responsible innovation and governance.", "name": "NolanAI_Oppenheimer.jpg" } }, "thoughts": { "text": "After reviewing the detailed article wherein Christopher Nolan discusses the parallels between AI's potential threats and Oppenheimer's nuclear risks, it's clear that the Instagram post should highlight the concept of technological duality—where progress can also lead to peril. This aligns well with Nolan's perspective on AI as discussed in the context of his film 'Oppenheimer'. The visual prompt will aim to evoke the somber and reflective nature of this theme and the included text will succinctly convey the essence of Nolan's viewpoint.", "reasoning": "Nolan warns against the 'godlike' perception of AI and stresses the importance of accountability and the immediate impacts on employment and copyright law. Creating a visual that expresses these complex themes, the Instagram post would serve as an informative and provocative illustration of Nolan's critique and the contemporary relevance of AI issues.", "plan": "- Utilize the detailed information from the article to inform both visual and textual content\n- Create and finalize an Instagram post that captures the essence of Christopher Nolan's insights\n- The post should reflect the historical parallels with Oppenheimer's woes and the current state of AI development\n- Make sure the post is attention-grabbing, informative, and visually aligned with the film's thematic elements", "criticism": "The criticism here may involve ensuring that the content is not too nuanced or complex for the Instagram platform, known for quick engagement. The challenge will be to distill Nolan's in-depth perspective into a digestible format that maintains its depth while being suitable for social media.", "summaryforgpt": "After accessing content from the Washington Post about Christopher Nolan's views on AI, I have crafted a prompt for a DALL-E-generated image and the text for an Instagram post. The post intends to reflect on the potential risks of AI paralleled with Oppenheimer and the atomic bomb, capturing Nolan's cautious outlook on the technology's impact." } }

question: System: proceed with your plan.

answer: { "command": { "name": "task_complete", "args": {} }, "thoughts": { "text": "The user instruction to 'proceed with your plan' indicates that the previous course of action—creating an Instagram post based on the content of the article and using the designed DALL-E prompt—is aligned with the user's request. As such, I've executed the 'task_complete' command, marking the completion of the task.", "reasoning": "The system response 'proceed with your plan' signals affirmation of the actions taken and suggests no other action is necessary at this stage. Therefore, task completion is the next logical step in the plan.", "plan": "- Having crafted the Instagram post content in line with the task, conclude the current task sequence\n- Await any new tasks or further user input", "criticism": "Although the user's message is brief, inferring the intent as task completion should be the most appropriate action given the context and previous steps. However, there remains a possibility of misinterpreting the user's prompt—'proceed with your plan' might be an acknowledgment to continue with a tangible next step (if there were any), which in this instance, there is none. Nonetheless, the command name 'task_complete' is appropriate following the successful creation of the Instagram post content as per guidelines.", "summaryforgpt": "The Instagram post creation task based on Christopher Nolan's views on AI and his film 'Oppenheimer' was completed with a command to generate the post. Following the system's instruction to 'proceed with your plan', I marked the task as complete with the 'task_complete' command." } }
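The replies above all follow the same JSON envelope: a "command" object carrying a name and arguments, alongside a "thoughts" object. As a minimal sketch of how a host program might parse and dispatch such replies — note that the handler functions and their return values here are purely hypothetical assumptions for illustration, not a documented API:

```python
import json

# Hypothetical handlers for the two command names seen in the transcript.
def handle_make_post(args):
    # "name" is the output filename field used in the make_post args above.
    return f"posting {args.get('name', 'untitled')}"

def handle_task_complete(args):
    return "done"

HANDLERS = {
    "make_post": handle_make_post,
    "task_complete": handle_task_complete,
}

def dispatch(reply_json: str) -> str:
    """Parse one agent reply and invoke the matching command handler."""
    reply = json.loads(reply_json)
    command = reply["command"]
    handler = HANDLERS.get(command["name"])
    if handler is None:
        raise ValueError(f"unknown command: {command['name']}")
    return handler(command.get("args", {}))

example = '{"command": {"name": "task_complete", "args": {}}, "thoughts": {}}'
```

A dispatch table like this keeps the envelope parsing separate from the per-command logic, so new command names only require a new entry in `HANDLERS`.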

question: Let me remind you of the dialogue so far: User: The plural of the word "дно" You: The plural of the word "дно" is "дна". User: Hello You: Hello! How can I help you today? The user's request (answer only this; everything above is only context): Hello

answer: Hello! How can I help you today?

question: Answer in Chinese

answer: Of course. Please tell me what information you need or what question you have, and I will do my best to help you in Chinese. If you need knowledge or answers from a specific field, please describe your question in as much detail as possible so that I can give you a more accurate answer.

