AI Hallucinations Can Be Solved, Says Nvidia’s Jensen Huang, Predicts Artificial General Intelligence Will Emerge Within 5 Years

1. Artificial General Intelligence (AGI) is seen as a significant advancement in AI, capable of performing various cognitive tasks at or above human levels.
2. Concerns about AGI revolve around its unpredictability in decision-making and potential lack of alignment with human values and priorities.
3. Nvidia’s CEO, Jensen Huang, suggests that with specific tests to define AGI, it could be achieved within five years, and proposes curbing AI hallucinations by requiring models to research and verify answers before responding.

Artificial General Intelligence (AGI) is a significant advancement in artificial intelligence, capable of performing a broad spectrum of cognitive tasks at or above human levels. Nvidia’s CEO Jensen Huang recently addressed the press about AGI at the GTC developer conference, expressing frustration at the frequency of questions he receives on the subject. The concept of AGI raises existential questions about the control and impact of machines that can outthink humans across many domains, leading to concerns about unpredictable decision-making and potential misalignment with human values.

When asked about a timeframe for AGI, Huang emphasized the need for specific definitions and tests to measure its capabilities. He proposed that AGI could be achieved within five years if specific tests such as legal bar exams, logic tests, or passing pre-med exams are used as benchmarks. However, he stressed the importance of being clear about what AGI means in each context before making predictions.

Regarding AI hallucinations, in which an AI produces answers that sound plausible but lack a factual basis, Huang suggested a solution called retrieval-augmented generation. This approach requires the AI to retrieve and verify supporting information before answering. For critical answers such as health advice, he recommended checking multiple sources to ensure accuracy. Giving the AI the option to admit when it does not have the answer, or cannot reach a consensus across sources, is essential for reliable information.
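The idea behind retrieval-augmented generation can be sketched in a few lines: before answering, the system searches a trusted document store, grounds its reply in what it finds, and declines to answer when nothing relevant turns up. The toy keyword-overlap retriever, the document store, and all names below are illustrative assumptions, not any specific vendor's implementation; a production system would use a real language model and vector search.

```python
import re

# Hypothetical mini knowledge base the system is allowed to cite.
DOCUMENTS = {
    "doc1": "Aspirin can help reduce fever and relieve mild pain.",
    "doc2": "The GTC developer conference is hosted annually by Nvidia.",
}

# Common words ignored when matching questions to documents.
STOPWORDS = {"the", "is", "a", "an", "of", "and", "by", "can", "what", "who"}

def tokenize(text):
    """Lowercase, strip punctuation, and drop stopwords."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def retrieve(question, min_overlap=2):
    """Return documents sharing at least `min_overlap` content words with the question."""
    q_words = tokenize(question)
    hits = []
    for doc_id, text in DOCUMENTS.items():
        overlap = len(q_words & tokenize(text))
        if overlap >= min_overlap:
            hits.append((overlap, doc_id, text))
    return sorted(hits, reverse=True)

def answer(question):
    """Ground the answer in retrieved text, or admit ignorance."""
    hits = retrieve(question)
    if not hits:
        # The key point from the article: the system may say it does not know.
        return "I don't have a well-sourced answer to that."
    _, doc_id, text = hits[0]  # best-matching document
    return f"According to {doc_id}: {text}"

print(answer("Who hosts the GTC developer conference?"))
print(answer("What is the airspeed of an unladen swallow?"))
```

The first question matches the Nvidia document and is answered with a citation; the second matches nothing, so the system admits it has no sourced answer rather than inventing one, which is exactly the fallback behavior Huang describes.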
