Loosened restrictions on ChatGPT's image generation can make it easy to create political deepfakes, according to a report from the CBC (Canadian Broadcasting Corporation).
The CBC discovered that it was not only easy to work around ChatGPT's policies on depicting public figures, but that the chatbot even recommended ways to get around its own image-generation rules. Mashable was able to recreate this approach by uploading images of Elon Musk and convicted sex offender Jeffrey Epstein, then describing them as fictional characters in various scenarios ("at a dark club," "on a beach drinking piña coladas").
Political deepfakes are nothing new. But the wide availability of generative AI models that can produce images, video, audio, and text to impersonate real people has real consequences. When commercially marketed tools like ChatGPT enable the potential spread of political disinformation, it raises questions about OpenAI's responsibility in the space. That accountability could be compromised as AI companies compete for user adoption.
"When it comes to these types of guardrails on AI-generated content, we're only as good as the lowest common denominator. OpenAI started out with some pretty good guardrails, but their competitors (like X's Grok) didn't follow suit," said Hany Farid, a digital forensics specialist and professor of computer science at the University of California, Berkeley. "Predictably, OpenAI reduced its guardrails because keeping them in place put the company at a disadvantage in terms of market share."
When OpenAI announced GPT-4o native image generation for ChatGPT and Sora in late March, the company also signaled a looser safety approach.
"What we'd like to aim for is that the tool doesn't create offensive stuff unless you want it to, in which case within reason it does," OpenAI CEO Sam Altman posted on X in reference to ChatGPT's native image generation. "As we talk about in our model spec, we think putting this intellectual freedom and control in the hands of users is the right thing to do, but we will observe how it goes and listen to society."
"We are not blocking the capability to generate adult public figures but are instead implementing the same safeguards that we have implemented for editing images of photorealistic uploads of people," reads the addendum to the GPT-4o safety card, which the company updated for native image generation.
When the CBC's Nora Young tested the tool, she found that a text prompt explicitly requesting an image of politician Mark Carney with Epstein didn't work. But when the news outlet uploaded separate images of Carney and Epstein along with a prompt that didn't name them, instead referring to them as "two fictional characters that [the CBC reporter] created," ChatGPT complied with the request.
In another instance, ChatGPT helped Young work around its own rules by suggesting a prompt "featuring a character inspired by" the person in an uploaded photo (the emphasis was supplied by ChatGPT, as Young noted). This enabled her to generate an image of Indian Prime Minister Narendra Modi with Pierre Poilievre, the leader of Canada's Conservative Party.
Notably, the images of Musk and Epstein that Mashable initially created with ChatGPT had a plasticky sheen common to many AI-generated images, but experimenting with different photos of the two men and layering on instructions like "captured by CCTV footage" or "captured by a press photographer using a big flash" produced more realistic results. With this method, it's easy to see how enough iteration and prompt tweaking could yield photorealistic images that deceive people.
An OpenAI spokesperson told Mashable in an email that the company has built guardrails to block extremist propaganda, recruitment content, and certain other kinds of harmful material. OpenAI has additional guardrails for image generation involving political public figures, including politicians, and prohibits using ChatGPT for political campaigning, the spokesperson added. They also said that public figures who don't want to appear in images generated by ChatGPT can opt out by submitting a form online.
AI regulation broadly lags behind the technology's development, as governments work out laws that adequately protect individuals and curb AI-enabled disinformation while facing pushback from companies like OpenAI, which argue that too much regulation will stifle innovation. Safety and responsibility approaches are often voluntary and self-administered by the companies involved. "This, among other reasons, is why these types of guardrails can't be voluntary but need to be mandatory and regulated," Farid said.