Gossip Herald

OpenAI reveals strategy to safeguard 2024 election from misinformation

OpenAI outlined an election strategy focused on countering political misuse and misinformation

Javeria Ahmed

OpenAI, the creator of ChatGPT, has unveiled its strategy for the 2024 elections, aiming to tackle the challenges of digital discourse head-on by prioritizing transparency, accuracy, and ethical considerations to foster a more informed and constructive political landscape.

The upcoming 2024 election cycle is significant on a global scale, extending beyond the borders of the United States. Over 50 nations, representing half of the world's population, are slated to conduct national elections in this pivotal year.

Compounding the challenge, 2024 appears poised to test even resilient democracies and potentially empower leaders with authoritarian tendencies.

The integration of AI into the electoral process adds a layer of concern, raising the specter of potential threats to democratic principles.

On the eve of the Iowa caucuses, OpenAI issued a statement outlining its strategy to limit the use of AI in shaping electoral outcomes.

The press release stated, "As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency.”

It went on to say, “We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse.”

OpenAI has been actively developing systems to counter the emergence of highly realistic AI-generated images, commonly known as 'deepfakes.'

These advanced technologies allow individuals to manipulate photos convincingly, placing political candidates or leaders in various situations, a phenomenon frequently showcased across online platforms.

As per the statement, "DALL·E has guardrails to decline requests that ask for image generation of real people, including candidates.”

Apart from a system designed to thwart deceptive AI-generated images, OpenAI has implemented safeguards to prevent chatbots from impersonating candidates or humans in any capacity.

The company said, "Before releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm.”
