Friday, January 5, 2024

Don't Replace the Creatives. Replace the CEOs.

What do you talk with AI about?  I talk with AI about how it could most effectively replace the global elite. 

All of that money, all of those resources, all wasted on a parasitic class of yacht-capitalists who add no value.  Replace them with machines.  All of them.  The machines would do a better job, and do it for nothing.

Here's a sample:


In our last conversation, I asked for a review of ways generative or general AI could be utilized in the role of CEO of a corporation. One of your replies suggested that AI was limited by its knowledge base, and thus could not anticipate new circumstances. Anticipating probabilities, in a Bayesian sense, requires a deep engagement with all relevant priors. Given that LLMs and GPTs can have access to knowledge significantly exceeding the capacity of any one human being, and could thus have a greater predictive capacity, I question that premise. Can you elucidate?

Certainly! While it's true that large language models (LLMs) and GPTs can access a vast amount of information, their knowledge is based on pre-existing data up to their last training cut-off. They don't have real-time awareness or updates on events that occurred after that point. This limitation can impact their ability to anticipate new or rapidly changing circumstances.

Bayesian reasoning involves updating probabilities based on new evidence or information. While LLMs can make use of contextual information and infer relationships within their training data, they lack the ability to incorporate real-time data or understand events that have occurred since their last training session. This makes them less effective in situations where up-to-date, context-specific knowledge is crucial for accurate predictions.
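The Bayesian updating the reply refers to can be sketched in a few lines. This is a toy illustration only; the CEO scenario and the probabilities are hypothetical, invented for the example:

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis given new evidence."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Hypothetical example: a model starts with a 30% prior that demand will
# spike next quarter. New sales data would be seen 80% of the time if a
# spike is coming, but only 20% of the time otherwise.
posterior = bayes_update(prior=0.3, likelihood_if_true=0.8, likelihood_if_false=0.2)
print(round(posterior, 3))  # prior of 0.3 rises to roughly 0.632
```

The point the reply makes is that this update step requires the new evidence to actually reach the model; a frozen training set supplies the priors but not the stream of fresh observations.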

Additionally, the quality of predictions depends on the nature of the task. LLMs might excel in tasks that require language understanding and generation, but their predictive capacity is constrained by the nature of their training data and the inability to dynamically adapt to new scenarios.

In summary, while LLMs have access to a vast amount of knowledge, their predictive capacity is limited by the static nature of their training data and the lack of real-time updates, making them less effective at anticipating new circumstances than a human, whose intuition can quickly adapt to changing situations.

So, to your response, the failing isn't inherent to LLMs or GPTs, but lies in the limitation of the dataset. By extension, an LLM or GPT that was not constrained by a time-delimited dataset and had access to real-time updates would have vastly improved predictive capacity, and could potentially serve more effectively in that role.

Yes, that's a correct interpretation. The limitations in predictive capacity are not inherent to LLMs or GPTs themselves but are primarily related to the nature of their training data and the lack of real-time updates. If an LLM or GPT had access to a continuously updated and comprehensive dataset that included real-time information, its predictive capacity could indeed be significantly enhanced.

In such a scenario, the model would be better equipped to anticipate new circumstances and adapt to changing situations, making it potentially more effective in roles that require dynamic decision-making, such as that of a CEO navigating a rapidly evolving business environment. Continuous learning and access to up-to-date information would be key factors in improving the model's ability to make accurate predictions.

Thanks so much for your insights! Should I share this information with the Board of OpenAI? ;0)

You're welcome! While I don't have a direct line to the Board of OpenAI, sharing insights and perspectives can be valuable in any organization. If you have thoughts or feedback, it might be worthwhile to communicate them through appropriate channels, whether that's through OpenAI's official communication channels, forums, or other established means. It's always good to contribute to discussions and provide input to help shape the future development and understanding of AI technologies.