PROJECT NAME

GenAI Use Case Evaluation

SKILLS
Facilitation, User Interviews, User Research, Use Case Definition
STAKEHOLDERS
General Dynamics Mission Systems
The Challenge:

Given the growing popularity of Generative AI and its high potential to positively impact processes within the engineering organization, leadership at General Dynamics Mission Systems tasked a small team with identifying, planning, and executing the implementation of Generative AI Large Language Models (LLMs).

After implementing a rudimentary internally hosted chat tool, our team was inundated with suggestions from all lines of business, each proposing ways to integrate Generative AI within the organization. I was asked to help identify, categorize, and prioritize the various applications for Generative AI.

Categorizing:

I began by categorizing the list of use cases based on both business and user goals. To inform this process, I conducted a trade study to examine how the commercial sector was classifying tools developed using Large Language Models. I then conducted internal interviews with subject matter experts and key stakeholders. This research led to the development of four core parameters to define the use cases:

  1. Data Type: What type of data are we working with?
  2. User Goals: What does the user need the tool to do for them?
  3. Enterprise Function: What enterprise function does this tool support?
  4. Business Value: How does this tool provide value to the business?
Defining:

After the concepts were defined and categorized, stakeholder and SME interviews revealed gaps in our understanding of each use case's importance and implementation difficulty. To address this, I developed a targeted interview script through additional SME and stakeholder discussions, which enabled the team to prioritize the use cases and create a roadmap. Below is the script used in 28 interviews with the original points of contact.

Generative AI Use Case Interview Script Page 1
Generative AI Use Case Interview Script Page 2
Prioritizing:

A facilitation event was held to review the data and prioritize use cases into six categories:

  1. Use cases that could be solved with an assistant (a custom chat).
  2. Use cases that could be solved with a self-service Retrieval-Augmented Generation (RAG) tool.
  3. Use cases that could be solved with a custom-developed solution.
  4. Use cases that could be solved with a custom-built RAG.
  5. Use cases that involved data of higher sensitivity than CUI and required future planning.
  6. Use cases that were customer-facing and required future planning.

Based on this categorization and prioritization, the team addressed the majority of the identified use cases by developing an assistant tool with a self-service RAG option, while continuing to implement custom-developed Generative AI solutions.
