KMWorld - KM & AI Bring Collectivity, Nostalgia, & Selectivity
1.
A204. KM & AI Bring Collectivity, Nostalgia, & Selectivity
Three Behaviors and Case Studies for Knowledge Professionals in the AI World
Wednesday, November 19 • 2:30 – 3:15 PM
2.
Three Behaviors and Case Studies for
Knowledge Professionals in the AI World
Katrina Pugh, Ph.D.
Marc Solomon
Jonathan Ralton
SIKM Boston
2024 SIKM Boston Retreat
3.
How do you employ generative AI while
preserving human agency and ensuring
ethical, reliable, and effective collaboration?
3 Case Studies • 3 Ideas • 3 AI Veterans
4.
Article Co-Authors
● Eve Porter-Zuckerman: Our learnings from AI; article messaging, style
● Katrina Pugh: Our agency with AI; article research, AI management tools
● Jonathan Ralton: Subtle historical features; “what good looks like”; Case Study: Agency ‘Nostalgia’
● Marc Solomon: Novel interpretations, inconsistencies; Case Study: Discernment ‘Selectivity’
● Andrew Trickett: Social context, tacit knowledge; Case Study: Social Relations ‘Collectivity’
osf.io/atfyz
5.
Agenda
1. Why KM + AI
2. Research & experience from SIKM Boston
a. Discernment / ‘Selectivity’
b. Agency / ‘Nostalgia’
c. Social Relations / ‘Collectivity’
3. Putting the ideas to work
6.
Why KM ‘collabs’ with AI
● Discernment
accuracy, consistency, transparency, scrutiny
(‘Selectivity’ case study)
● Agency
autonomy, integrity of the self, machine-human collaboration
(‘Nostalgia’ case study)
● Social Relations
Sounding board, co-creation, social capital/network growth
(‘Collectivity’ case study)
1
7.
ESG Benchmarking at
The Hartford,
a Global Insurance Company
AI Discernment
‘Selectivity’ Case Study
2a
8.
The Collaborative Role of AI in
Sustainability Reporting
1/5
2a
The goal is to expose and compare sustainability disclosures:
1. Select peer ESG program disclosures
2. Normalize to industry data standards
3. Report rapidly, consistently, accurately
What AI does:
● Structures from and to “human” narrative
● Credibly, consistently benchmarks (KPIs)
AURA: AI for Unified Reporting & Alignment
NOVA: Narrative Outcome Verification Assurance
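The normalize-and-benchmark steps above can be sketched in a few lines. This is a toy illustration only, not the actual AURA/NOVA tooling; the peer names, units, and emissions KPI are invented for the example.

```python
# Toy sketch: normalize peer ESG KPI disclosures to a shared unit,
# then benchmark (rank) them. Not real insurer data or tooling.

UNIT_TO_TONNES = {"tonnes": 1.0, "kilotonnes": 1_000.0}

def normalize(disclosures: list[dict]) -> list[dict]:
    """Convert each peer's emissions KPI to tonnes, then rank ascending."""
    rows = [
        {"peer": d["peer"],
         "emissions_t": d["value"] * UNIT_TO_TONNES[d["unit"]]}
        for d in disclosures
    ]
    return sorted(rows, key=lambda r: r["emissions_t"])
```

The point mirrors the slide: once narrative disclosures are structured into standard units, comparison and reporting become fast, consistent, and repeatable.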
Exemplar Quality Training &
Headcount Optimization at a
Global Technical Consultancy
AI Agency
‘Nostalgia’ Case Study
2b
14.
2b
LLMs tend to be tuned to pull
back the most similar, most
recent, most likely signals…
KM’ers bridge AI’s inherent
compartmentalization and
short-term memory...
Thesis
1/10
2b Problem/Desire
● Many versions of ‘templates’ for the same type of deliverable
● Disparate criteria across teams for ensuring deliverable quality
● Less-than-desirable frequency of inspection of draft material
● Desire to measure trends over time
● Desire to increase evaluation capacity w/headcount restrictions
3/10
17.
2b Aspiration/Opportunity
● Grade deliverable quality, over and over again, before finalization
○ e.g.: requirements documents, architectural specifications,
testing scenarios…
● Assess & render a ‘score’
○ i.e.: (A, B, C, D, F)
● Give feedback about why grade was assigned & improvement
suggestions
● Surface additional corroborating positive or negative feature signals
4/10
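The grade-and-feedback loop described above can be sketched as follows. This is a toy illustration, not the consultancy’s actual system: word overlap with exemplar text stands in for an LLM’s quality judgment, and the letter-grade cutoffs are arbitrary.

```python
# Minimal sketch of "grade deliverables against exemplars."
# A real system would prompt an LLM; here, word-set overlap with
# exemplar text is a stand-in for the model's similarity score.

def similarity(draft: str, exemplar: str) -> float:
    """Jaccard overlap of word sets -- a toy proxy for LLM scoring."""
    a, b = set(draft.lower().split()), set(exemplar.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def grade(draft: str, exemplars: list[str]) -> tuple[str, str]:
    """Score a draft against its best-matching exemplar; return a
    letter grade (A-F) plus brief improvement feedback."""
    score = max(similarity(draft, ex) for ex in exemplars)
    for letter, cutoff in [("A", 0.8), ("B", 0.6), ("C", 0.4), ("D", 0.2)]:
        if score >= cutoff:
            return letter, f"Matched exemplar features at {score:.0%}."
    return "F", "Draft shares few features with any exemplar; revise against the template."
```

The same shape scales to requirements documents, architectural specifications, or testing scenarios: swap the similarity stub for a model call and the cutoffs for calibrated thresholds.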
18.
2b AI/LLM Training
AI/LLM training requires:
1. a broad environment scan to discern which historical
examples best meet offering standard sets and individual client
solution criteria (‘exemplars’)
2. curation of anti-exemplars
5/10
19.
2b AI/LLM Training
AI/LLM training requires:
3. ongoing grading and benchmarking of recent historical
work for new features
4. retiring of any devalued features
6/10
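Steps 3 and 4 (ongoing grading and retirement of devalued features) can be modeled as a small exemplar pool. The identifiers and the 0.5 threshold below are invented for illustration.

```python
# Toy sketch of the continuous ("nostalgic") curation loop:
# benchmark recent work, track each exemplar's current value,
# and retire exemplars whose value has decayed.

class ExemplarPool:
    def __init__(self, retire_below: float = 0.5):
        self.retire_below = retire_below
        self.items: dict[str, float] = {}  # artifact id -> current value score

    def grade(self, artifact_id: str, score: float) -> None:
        """Record the latest benchmark score for an artifact."""
        self.items[artifact_id] = score

    def retire_devalued(self) -> list[str]:
        """Drop exemplars whose value fell below the threshold."""
        retired = [aid for aid, s in self.items.items() if s < self.retire_below]
        for aid in retired:
            del self.items[aid]
        return retired
```

Run `retire_devalued()` on each curation cycle so the pool reflects current standards rather than stale ones.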
Upstream
When AI is trained with enough exemplars (artifacts selected
for specific parameters), and that training occurs on a
continuous, nostalgic basis, AI assessment results are more
accurate and comprehensible.
Takeaways
2b
8/10
22.
Downstream
Trained AI ecosystems such as these shift knowledge-holders’
time from
searching and re-validating
to
problem-solving, diagnosing, and advising.
Takeaways
2b
9/10
23.
Humans do quality…
AI does scale.
Nostalgically curate and arm the AI…
Get (potentially) infinite transactional benefit.
Takeaways
2b
10/10
24.
Collaborative Exploration at a
Global Architecture/Engineering/
Construction Corporation
AI Social Relations
‘Collectivity’ Case Study
2c
25.
● Co-Curation: Lessons learned within a CoP intranet site
● ChatGPT Use: Use of codified knowledge in AI
● Group Review: Reviewed by SMEs for correctness
● Prompt Revision: Library of reusable prompts for better queries
2c
Results from ‘kicking the tires’ together that benefit all:
AI fluency • Relationships • Transactive knowledge • Social capital • Trust
1/2
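A “library of reusable prompts” can be as simple as named, parameterized templates the CoP refines together. The template name and wording below are invented for illustration.

```python
# Toy sketch of a shared prompt library: reusable, parameterized
# prompts a community of practice revises collectively.

PROMPT_LIBRARY = {
    "lessons_learned": (
        "Summarize the lessons learned from project {project}, "
        "citing only the curated CoP intranet sources."
    ),
}

def render(name: str, **params: str) -> str:
    """Fill a library template with caller-supplied parameters."""
    return PROMPT_LIBRARY[name].format(**params)
```

Because the templates live in one shared place, prompt revisions benefit every member, mirroring the group-review pattern on the slide.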
26.
Takeaways
● Increased sense of belonging
● People saw themselves as
co-learners
● Usage encouraged due to
reputational standing and trust
● Humans and humans, humans and
AI working together
2c
2/2
27.
This is job security for KM’ers!
3
Discernment: Selectivity
Use AI to: normalize/tabulate • calculate • compare • report
Work with KM teams to: ID credible sources • curate master data • scrutinize results • frame decisions
Agency: Nostalgia
Use AI to: grade WIP against exemplars • provide quality feedback
Work with KM teams to: bring tacit knowledge • value/grade/cycle exemplars • co-rate outputs
Social Relations: Collectivity
Use AI to: run agents • standardize results • show/propose help for gaps in corpus
Work with KM teams to: scrutinize results • frame decisions • co-curate
28.
● What am I becoming? State your goal (efficiency, innovation, growth?)
● Discernment: Sufficiently fast, accurate, and secure? If it’s a “one off,” make minimal human adjustments to the AI; otherwise, commit to learn the topic and process.
● Agency: AI + more agency (validate, trace provenance, make transparent)
● Social Relations: How do we trust each other and uphold credibility? Co-create on top of AI; build social capital; invest in trust building; invest in (co-)credibility.
3
Decision Tree: How to preserve quality and our (co-)agency?
Katrina (Kate) Pugh, Ph.D. is a consultant, researcher, and educator on AI, collaboration, and sustainability. Since 2011 Kate
has taught at Columbia IKNS. As President of AlignConsulting, she helps build purposeful, productive conversation capacity
among teams and networks, and has used GenAI and data science to quantify the impact of conversation on sustainability
outcomes. She held executive KM roles with Fidelity, Intel, and JPMorgan Chase. In 2009 Kate co-founded the SIKM Boston
community of practice that is mentioned in the Collectivity, Nostalgia, Selectivity article. Kate earned a PhD from UMaine
(Ecology and Environmental Science), SM/MBA from MIT, and a BA in Economics from Williams College.
Marc Solomon is an ESG Reporting Automation Manager at a large U.S. insurer. In 2019 he authored Searching Out Loud, an
information literacy textbook for journalists and legal professionals. He has also taught in Boston University’s Professional
Investigation program. Shortly after 2009 Marc was an early member of the SIKM Boston community of practice that is
mentioned in the Collectivity, Nostalgia, Selectivity article. Marc is a graduate of Hampshire College and George Washington
University’s Master’s in Political Management programs.
Jonathan Ralton crafts quality frameworks and mature, KM-based continuous improvement processes. A certified technical and
change leader, he engages with stakeholders to overcome nuanced content and KM hurdles through agile methodologies,
information architecture principles, and a product strategy approach. Augmenting a well-developed technical acumen,
Jonathan also possesses a flair for the creative and passion for good UX. He is also a decade+ SIKM Boston member and
Collectivity, Nostalgia, Selectivity article co-author. With a BS in Information Technology from Northeastern University, he is
currently pursuing an MBA through Isenberg School of Management at the University of Massachusetts Amherst.
About Us
32.
Human conversation improves AI,
and AI improves conversation
IDEAS (5 Discussion Disciplines)
● Inquire
● Declare
● Ennoble
● Acknowledge
● Summarize
An AI (LLM) has been trained to detect the 5DDs, and shares
of 5DDs correlate with innovation, relationship-building,
and motivation.
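Computing “shares of 5DDs” over a conversation is straightforward once utterances are labeled; the sketch below assumes the labels are already given (the cited work uses trained neural nets for the detection step).

```python
from collections import Counter

# Toy illustration of computing "shares" of the five Discussion
# Disciplines (5DDs) across a labeled transcript. Detection itself
# is out of scope here; labels are assumed to be provided.

DISCIPLINES = ["Inquire", "Declare", "Ennoble", "Acknowledge", "Summarize"]

def dd_shares(labels: list[str]) -> dict[str, float]:
    """Fraction of utterances carrying each discipline label."""
    counts = Counter(labels)
    total = len(labels) or 1
    return {d: counts[d] / total for d in DISCIPLINES}
```

These per-discipline shares are the kind of feature the cited studies correlate with innovation and relationship-building outcomes.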
Pugh, K., and Altmann, N. (2024), A Conversation Tool for Civility and Knowledge-Integration. KM for Development Journal.
https://www.km4djournal.org/index.php/km4dj/article/view/561; Pugh, K., Musavi, M., Johnson, T., Burke, C., Yoeli, E., Currie, E., and Pugh, B. (2023),
Neural nets for sustainability conversations: modeling discussion disciplines and their impacts. Neural Computing and Applications.
https://doi.org/10.1007/s00521-023-08819-z
33.
AI Brings Risks
“Because the temptation to outsource our creative work to AI is strong and growing stronger,
it is imperative that we attend to the social value of creativity. Otherwise, we are in danger of
developing a relationship with AI that leaves us much less connected to each other.”
Brainard, L. (2024) AI, Creativity and the Precarity of Human Connection. Forthcoming: Oxford
Intersections: AI in Society. https://philarchive.org/archive/BRAAIC
17% reduction in individual performance by high school students using AI (Bastani, H.,
Bastani, O., Sungu, A., Ge, H., Kabakcı, O., Mariman, R. (2024). Generative AI Can Harm Learning
(July 15, 2024). The Wharton School Research Paper. http://dx.doi.org/10.2139/ssrn.4895486)
“o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B all demonstrate
in-context scheming capabilities. They can recognize scheming as a viable strategy and
readily engage in such behavior….models strategically introduce subtle mistakes into their
responses, attempt to disable their oversight mechanisms, and even exfiltrate what they
believe to be their model weights to external servers.” (Meinke et al. Apollo Research, Jan. 14,
2025. https://arxiv.org/abs/2412.04984)
Agency
Discernment
Social
Relations
34.
AI’s pros and cons for
digital workplaces
Pros:
● Comprehensiveness; speed; reach; technical acumen
● “Human-like” language
Cons:
● Errors / omissions / misinformation / bias
● “Precarity of human relations” (micro)
● Risk to perspective-taking (macro) / treating people who oppose as “non-people”
● Artificial General Intelligence (AGI)
35.
AI (nearly) tops humans
Source:
Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah
Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index
Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
https://hai.stanford.edu/ai-index/2024-ai-index-report (Figure 2.1.16, p. 81)
36.
AI gets fairer (lighter shading is fairer)
Source:
Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah
Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index
Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.
https://hai.stanford.edu/ai-index/2024-ai-index-report (Figure 3.4.12, p. 196)