According to Bloomberg Business, investment firms are fundamentally reshaping how they work with artificial intelligence, treating AI as a collaborative colleague rather than a mere tool. Ai Ling Ong, Head of Artificial Intelligence for Investments at Lion Global Investors, emphasized that domain knowledge is non-negotiable – you can’t train AI to think like a fund manager without deep market understanding. Her team has fully adopted generative AI but still subjects every single output to human review for verification. Meanwhile, Joo Lee of Arrowpoint Investment Partners described building “Jarvis-like” intelligence layers where AI works alongside humans across valuation, risk, and research functions. Bloomberg itself is taking the same balanced approach, building transparent, auditable AI systems that maintain accountability while harnessing automation.
Human oversight still rules
Here’s the thing that really stands out – even the most advanced investment firms aren’t letting AI run wild. They’re treating it like a brilliant but inexperienced junior analyst. Ai Ling Ong’s chess analogy hits the mark: if you don’t understand the game yourself, you can’t possibly train someone else to play it well. And in finance, the stakes are way higher than losing a chess match.
I think this reveals something important about where we are with AI adoption in critical industries. The hype suggests AI is replacing everyone, but the reality is more nuanced. These investment pros are using AI to handle the grunt work – refactoring code, generating research, processing data – while keeping human judgment in the driver’s seat. Basically, they’re getting the productivity boost without the accountability gap.
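To make that concrete, here’s a rough sketch of the review-gate pattern in Python – the model drafts, but nothing ships until a named human signs off. Every name in it (AIOutput, ReviewGate, the analyst handle) is made up for illustration; none of this is what these firms actually run.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIOutput:
    """A single piece of AI-generated work product awaiting human review."""
    task: str                       # e.g. "draft earnings summary for ACME Corp"
    content: str                    # what the model produced
    model: str                      # which model generated it
    approved: bool = False
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

class ReviewGate:
    """Queue of AI outputs; nothing is released downstream until a human approves it."""
    def __init__(self) -> None:
        self._pending: list[AIOutput] = []
        self._released: list[AIOutput] = []

    def submit(self, output: AIOutput) -> None:
        # The model's draft lands here, not in front of clients.
        self._pending.append(output)

    def approve(self, output: AIOutput, reviewer: str) -> AIOutput:
        # A named human takes ownership; the approval is timestamped for audit.
        output.approved = True
        output.reviewer = reviewer
        output.reviewed_at = datetime.now(timezone.utc)
        self._pending.remove(output)
        self._released.append(output)
        return output

    def released(self) -> list[AIOutput]:
        # Only human-approved work product ever leaves the gate.
        return list(self._released)

# Usage: the model does the grunt work, the analyst keeps accountability.
gate = ReviewGate()
draft = AIOutput(task="summarise Q3 filing", content="...", model="some-llm")
gate.submit(draft)
gate.approve(draft, reviewer="senior_analyst")
```

The code itself is trivial; the point is the approval metadata. Recording who reviewed what, and when, is what turns an “AI colleague” into something you can actually audit.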
The Jarvis vision
Now Joo Lee’s “Jarvis-like” ecosystem concept is fascinating. It’s not about one monolithic AI system trying to do everything. Instead, they’re building specialized AI agents for different functions – valuation experts get their own AI assistant, risk managers get theirs. The long-term vision connects these agents into an intelligent layer across the entire firm.
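Just to show the shape of the idea, here’s a toy Python sketch – my own guess at the pattern, not anything Arrowpoint has published, and names like IntelligenceLayer are invented: one specialised agent per function, plus a thin shared layer that fans a question out to all of them.

```python
from typing import Protocol

class Agent(Protocol):
    """Interface each specialised agent exposes to the shared layer."""
    name: str
    def answer(self, question: str) -> str: ...

class ValuationAgent:
    name = "valuation"
    def answer(self, question: str) -> str:
        return f"[valuation view] {question}"

class RiskAgent:
    name = "risk"
    def answer(self, question: str) -> str:
        return f"[risk view] {question}"

class ResearchAgent:
    name = "research"
    def answer(self, question: str) -> str:
        return f"[research view] {question}"

class IntelligenceLayer:
    """The connective tissue: one question goes to every specialist,
    so cross-functional context isn't lost in departmental silos."""
    def __init__(self, agents: list[Agent]) -> None:
        self.agents = {a.name: a for a in agents}

    def ask(self, question: str) -> dict[str, str]:
        return {name: agent.answer(question) for name, agent in self.agents.items()}

# Usage: one prompt, three specialist perspectives side by side.
layer = IntelligenceLayer([ValuationAgent(), RiskAgent(), ResearchAgent()])
print(layer.ask("What does the new tariff schedule mean for ACME Corp?"))
```

Notice that all the interesting work happens in that shared layer – which is exactly the part I’m worried about next.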
But here’s my question: how long until these specialized agents start developing their own blind spots? If each department trains its own AI with its own data and priorities, could we end up with siloed intelligence that misses cross-functional insights? The challenge will be maintaining that connective tissue between specialized AI colleagues.
Where this leaves investors
For regular investors watching this unfold, the message seems clear: the firms that successfully marry human expertise with AI collaboration will likely have an edge. They’re not just throwing AI at problems and hoping for magic – they’re building disciplined systems where automation serves human insight rather than replaces it.
And honestly, that’s probably the right approach for any industry where decisions have real consequences. Whether you’re managing billions in assets or running a manufacturing floor, the pattern holds – technology amplifies expertise, but it doesn’t create it from scratch. The human + AI colleague model appears to be the sustainable path forward.
