Infusing Deep Expertise into Generative AI with Knowledge Elicitation

I’m thrilled to sit down with Dominic Jainy, a seasoned IT professional whose groundbreaking work in artificial intelligence, machine learning, and blockchain has positioned him as a leading voice in the field. With a passion for harnessing cutting-edge technologies to solve real-world challenges, Dominic has been at the forefront of using knowledge elicitation techniques to transform generative AI and large language models into domain-specific powerhouses. Today, we’ll dive into his innovative approaches to uncovering hidden expertise, blending old-school methods with modern AI tools, and shaping the future of specialized intelligence in areas like stock trading. Our conversation will explore the intricacies of working with human experts, the role of AI in surfacing unspoken rules, and the ongoing debate around synthetic versus human expertise.

How did you first approach a domain expert like the stock trader to start uncovering their hidden rules, and what was a memorable challenge you faced during those initial conversations?

I started by building rapport with the stock trader, just sitting down for a casual chat over coffee to understand their background and get a feel for how they think about their craft. I didn’t dive straight into technical questions; instead, I asked them to walk me through some of their past trades, focusing on why they made specific decisions. One challenge that really stood out was when I noticed they were hesitant to share the real reasoning behind certain picks—there was a moment when they glossed over a failed trade with a generic “market conditions” excuse. I could sense they were holding back, maybe out of fear of looking less skilled, so I had to gently probe deeper by asking about their emotional state during that trade, which eventually led to them revealing a gut-based rule they hadn’t articulated before. It was a slow process, but that breakthrough taught me the importance of patience and creating a safe space for honesty.

Can you walk us through how you guide an expert to reveal unspoken strategies, like the Sector Rotation Rule, and share a moment when a rule genuinely surprised you?

Guiding an expert to reveal unspoken strategies is like peeling back layers of an onion—you start broad and slowly get more specific. With the stock trader, I’d ask them to narrate their thought process for specific trades, focusing on patterns like why they favored tech stocks over energy at a given time, and I’d jot down every nuance they mentioned about sector trends. For the Sector Rotation Rule, which involves favoring stocks in a sector outperforming the market for two months unless macroeconomic indicators warn of contraction, I was floored when they casually mentioned tracking capital inflows as a key trigger. It surprised me because it wasn’t just about raw data—it was this intuitive blend of numbers and market vibe they’d internalized over years, something no textbook would teach. I remember sitting there, scribbling furiously, feeling like I’d struck gold because it showed how much of their expertise lived in these subtle, unwritten instincts.
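To make a rule like this precise enough for an AI to apply, it helps to state it as an explicit predicate. Here is a minimal sketch of the Sector Rotation Rule as described above; the `SectorSnapshot` type and the way "outperforming for two months" is operationalized (sector return beating the benchmark in each of the last two monthly periods) are my illustrative assumptions, not the trader's exact formulation:

```python
from dataclasses import dataclass

@dataclass
class SectorSnapshot:
    """One month of sector performance versus the broad market (hypothetical shape)."""
    sector_return: float  # sector return for the month, e.g. 0.04 = 4%
    market_return: float  # benchmark return for the same month

def sector_rotation_signal(monthly_history: list[SectorSnapshot],
                           contraction_warning: bool) -> bool:
    """Favor the sector if it beat the market in each of the last two months,
    unless macroeconomic indicators warn of contraction."""
    if contraction_warning or len(monthly_history) < 2:
        return False
    return all(m.sector_return > m.market_return for m in monthly_history[-2:])
```

The macro-indicator exception is reduced to a single boolean here; in practice that judgment call is exactly the kind of tacit knowledge the elicitation process tries to surface.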

How do you balance human-to-human and AI-to-human interactions during knowledge elicitation, and can you share an example of how AI helped uncover a rule like the Market Sentiment Rule?

Balancing human-to-human and AI-to-human interactions is all about leveraging the strengths of each. I usually start with personal conversations to build trust and get a baseline of the expert’s thought process, like I did with the stock trader by discussing their historical trades face-to-face. Then, I bring in the AI, using tools like ChatGPT to engage the expert in follow-up dialogues where it can ask probing questions or echo back rules for verification. A standout moment was when the AI uncovered the Market Sentiment Rule—if social and news sentiment is overly positive and the stock price jumps over 10% in a week, avoid entry for five trading days due to a potential hype cycle. I hadn’t picked up on this during my initial talks, but the AI, through its structured prompts, noticed the trader’s hesitance around hyped stocks and dug deeper, pulling out this gem. It was incredible to see the AI act almost like a detective, piecing together clues I’d missed, and it reinforced my belief in using both approaches together.

When it comes to using data to detect patterns, like with the Stop-Loss Discipline Rule, how do you prepare that data for AI analysis, and what was a surprising insight that emerged?

Preparing data for AI analysis is a meticulous process that starts with collecting raw information relevant to the expert’s decisions. For the stock trader, I gathered datasets including Trade ID, Date of Trade, Stock Ticker, Price, EPS Growth, P/E ratio, Sector Trend, Sentiment, and more, ensuring everything was structured in a clean, tabular format so the AI could parse it easily. I then fed this into the LLM with instructions to look for patterns behind trade actions, comparing them to already elicited rules. A surprising insight was the Stop-Loss Discipline Rule—if a stock drops more than 8% below purchase price, sell automatically, no matter the outlook. Initially, the trader didn’t consciously acknowledge using this rule, but after the AI flagged it and we discussed it over a glass of wine, they admitted it was a subconscious safety net they’d developed. I was taken aback by how the AI spotted this hidden consistency in their behavior, and it felt like uncovering a buried treasure that even the expert hadn’t fully recognized.
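The pattern-detection step above can be illustrated with a small script: given a trade log in the tabular shape described (a subset of the columns), scan for the consistency the AI flagged, i.e. exits clustering just past an 8% drawdown. The CSV content and column names here are hypothetical stand-ins for the trader's real dataset:

```python
import csv
import io

# Hypothetical trade log in the tabular shape described above (subset of columns).
TRADES_CSV = """trade_id,date,ticker,purchase_price,exit_price,action
1,2024-01-05,AAPL,100.0,91.5,sell
2,2024-02-10,MSFT,200.0,183.0,sell
3,2024-03-12,NVDA,300.0,290.0,hold
"""

def flag_stop_loss_candidates(csv_text: str, threshold: float = -0.08):
    """Flag trades whose drawdown from purchase price breached the threshold,
    as candidate evidence for an unstated stop-loss rule."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        purchase = float(row["purchase_price"])
        drawdown = (float(row["exit_price"]) - purchase) / purchase
        if drawdown <= threshold:
            flagged.append((row["trade_id"], row["ticker"], round(drawdown, 3)))
    return flagged
```

A consistent cluster of sells just past the -8% mark is the kind of behavioral regularity an LLM (or this simple scan) can surface even when the expert has never stated the rule aloud.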

How do you build trust with an expert who might be guarded about sharing their true methods, and can you recall a specific moment where you had to dig deeper?

Building trust with an expert is all about showing genuine curiosity and respect for their craft, while also being transparent about why I’m asking certain questions. I make it clear that my goal isn’t to judge or expose them, but to learn and help codify their brilliance into something scalable, like with the stock trader where I emphasized how their rules could enhance AI tools. A specific moment that sticks out is when I sensed they were giving me surface-level answers about why they avoided a particular stock, citing vague “intuition.” I had to dig deeper by asking about the context of that decision—what they were reading, who they talked to, even how they felt that day—and after some gentle nudging, they revealed a rule tied to avoiding overhyped stocks based on news sentiment. It was a tense few minutes, feeling like I was walking on eggshells, but once they opened up, it was like a dam broke, and I could see the relief in their eyes for finally articulating something they’d kept under wraps.

Your use of tools like ChatGPT to codify rules such as the Earnings Momentum Rule is fascinating. How do you decide which rules to input first, and what was a moment when the AI’s application of a rule really impressed you?

Deciding which rules to input into an AI like ChatGPT starts with prioritizing those that seem most foundational to the expert’s decision-making process. For the stock trader, I began with the Earnings Momentum Rule—if a company shows at least three consecutive quarters of earnings growth with an accelerating rate, consider it a buy unless the P/E ratio exceeds 30—because it underpinned many of their core strategies. I input it via a structured prompt, ensuring the AI understood the conditions and exceptions clearly. A moment that really impressed me was when I tested the AI by presenting a hypothetical stock scenario, and it not only flagged the stock as a buy candidate based on the rule but also explained its reasoning with a breakdown of the growth trend and P/E threshold. Seeing the AI mimic the trader’s logic so precisely gave me a rush of excitement—it was like watching a student ace a test after months of coaching, and it validated the whole elicitation process in a tangible way.
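Before handing a rule like this to an LLM, I find it useful to write it out as executable logic, since that forces every condition and exception into the open. A minimal sketch of the Earnings Momentum Rule as stated above, with "accelerating" interpreted as each quarter's growth rate exceeding the previous one (my assumption):

```python
def earnings_momentum_buy(quarterly_eps_growth: list[float],
                          pe_ratio: float,
                          max_pe: float = 30.0) -> bool:
    """Buy signal if the last three quarters each show positive EPS growth
    at an accelerating rate, unless the P/E ratio exceeds the cap."""
    if pe_ratio > max_pe:
        return False
    last_three = quarterly_eps_growth[-3:]
    if len(last_three) < 3:
        return False
    positive = all(g > 0 for g in last_three)
    accelerating = all(b > a for a, b in zip(last_three, last_three[1:]))
    return positive and accelerating
```

Encoding the rule this way also gives you ready-made test scenarios, like the hypothetical stock I used to probe whether the AI's reasoning matched the trader's.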

How do you structure follow-up conversations with experts to verify rules, and can you share a story of a rule that needed tweaking after feedback?

Structuring follow-up conversations to verify rules involves a mix of reflection and real-world testing. I sit down with the expert, like the stock trader, and walk them through each rule as codified in the AI, asking if it matches their intent and testing it against recent or hypothetical trades to see if it holds up. I also encourage them to point out edge cases where the rule might fail, keeping the tone collaborative so they feel ownership over the process. One story that comes to mind is with the Market Sentiment Rule—initially, the AI had it as avoiding entry for three trading days after a hype-driven price jump of over 10%, but the trader pointed out during a follow-up that five days was more realistic based on past hype cycles they’d observed. We adjusted it together over a late-night discussion, and I could feel their pride in refining something they’d helped create, which made the tweak feel less like a correction and more like a shared victory.

There’s a lot of debate around synthetic experts versus human experts in AI. How do you view the potential of LLMs to match human expertise in fields like stock trading, and what experience shaped your perspective?

I think LLMs have incredible potential to match human expertise in narrow domains like stock trading, but they’re not quite there yet in replicating the full depth of human intuition and adaptability. These models can codify and apply rules with precision, as I’ve seen with rules like the Earnings Momentum Rule in ChatGPT, but they lack the emotional and contextual nuance humans bring—like a trader’s gut feeling during a market crash. A defining experience for me was watching the AI suggest a trade based on a rule, only for the stock trader to override it because of a sudden geopolitical event the AI couldn’t weigh. It hit me then that while synthetic experts can be powerful tools, they’re best as partners to humans, not replacements, at least until we reach something closer to artificial general intelligence. I’m optimistic, though, that with continued knowledge elicitation, we’re bridging that gap day by day, and it’s thrilling to be part of that journey.

How do you adapt classic knowledge elicitation techniques from the expert systems era for today’s LLMs, and can you share a specific instance where this blend worked well with the stock trader?

Adapting classic knowledge elicitation techniques for modern LLMs involves taking those foundational methods—like verbalization protocols and problem-solving walkthroughs—and pairing them with AI’s ability to scale and analyze patterns. Back in the expert systems era, we relied heavily on manual note-taking and iterative interviews, but now I use AI to record, structure, and even prompt follow-up questions during elicitation. With the stock trader, a specific instance where this blend shone was when I used the old-school “speaking aloud” technique to have them narrate their thought process for a trade, then fed those raw insights into ChatGPT to look for underlying rules. The AI helped refine vague statements into something concrete like the Sector Rotation Rule, and when I played it back to the trader, their eyes lit up with recognition—it was like seeing their own mind reflected back, but clearer. That synergy between classic human interaction and AI’s processing power felt electric, and it showed me how far we can push these old methods with new tools.
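The handoff from think-aloud transcript to LLM can be as simple as a structured prompt that supplies the transcript alongside the rules already elicited and asks for anything new. This template is illustrative, not the exact wording from my sessions:

```python
def build_elicitation_prompt(transcript: str, known_rules: list[str]) -> str:
    """Assemble a structured prompt asking an LLM to surface candidate
    decision rules from a think-aloud transcript, checked against the
    rules already elicited. (Hypothetical template for illustration.)"""
    rules_block = "\n".join(f"- {r}" for r in known_rules) or "- (none yet)"
    return (
        "You are assisting with knowledge elicitation from a stock trader.\n"
        "Known rules so far:\n"
        f"{rules_block}\n\n"
        "Transcript of the trader thinking aloud during a trade:\n"
        f"{transcript}\n\n"
        "List any implicit decision rules in the transcript that are not "
        "covered by the known rules. State each as an IF/THEN condition "
        "with its exceptions, and quote the transcript passage it came from."
    )
```

Asking for IF/THEN form plus the supporting quote makes the AI's candidate rules easy to play back to the expert for verification, which is where the classic iterative-interview loop comes back in.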

Looking ahead, what is your forecast for the future of knowledge elicitation and generative AI in creating domain-specific expertise?

I believe the future of knowledge elicitation and generative AI is incredibly bright, with the potential to create hyper-specialized tools that rival or even surpass human experts in narrow fields. We’re moving toward a world where AI can not only codify existing expertise but also dynamically evolve rules by continuously learning from new data and expert feedback, much like how we added rules like the Stop-Loss Discipline Rule during my work with the stock trader. I foresee a hybrid model where human intuition and AI precision work hand-in-hand, democratizing expertise so that even small businesses or individuals can access top-tier insights in domains like finance or medicine. There will be challenges, especially around ethics and ensuring trust in synthetic experts, but I’m excited to see AI become a true collaborator, amplifying human potential in ways we’re just starting to imagine.
