Beyond ChatGPT
What You Need To Know About AI for CS
February 14, 2025 | Co-led by Mariena Quintanilla and Jan Young
Beyond ChatGPT
Let's explore how AI (LLMs) can do much more than just chat.
ChatGPT is the most basic use of LLMs
Generating text based on patterns learned from large amounts of data is impressive, but it only scratches the surface of how LLMs can transform the way we work.
LLM Apps: Advanced Architectures
Retrieval-Augmented Generation (RAG)
Query external knowledge bases
Limitations addressed:
  • Information gaps
  • Hallucinations
Text-to-SQL and Data Q&A
Turn natural language into SQL queries, and use LLM-aided code generation for data analysis and report generation
Limitation addressed:
  • Information gaps
AI Automation
Extend automation workflows with steps that dynamically generate language as well as evaluate it.
Limitation addressed:
  • Reactive nature
Agentic Workflows
Processes that can perform complex tasks by breaking them down into sub-tasks, then evaluating and correcting their own performance.
Limitation addressed:
  • Passive nature
LLM Tech: RAG
Responses are based, in full or in part, on knowledge the model retrieves.
Similar to search: results are queried and ranked, then an LLM summarizes them. A clean, reliable knowledge base is required for this to be useful.
Multiple versions of files or conflicting information will cause issues.
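For those curious what this looks like in practice, here is a minimal sketch of the retrieve-then-summarize pattern. It assumes the OpenAI Python SDK, and search_knowledge_base is a hypothetical placeholder for whatever search engine or vector index sits over your knowledge base.

  # Minimal RAG sketch (illustrative only).
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def search_knowledge_base(question: str, top_k: int = 3) -> list[str]:
      """Placeholder: query your search engine or vector index and return
      the top-ranked passages. The implementation depends on your KB."""
      raise NotImplementedError

  def answer_with_rag(question: str) -> str:
      # Retrieve and rank relevant passages, then let the LLM summarize them
      passages = search_knowledge_base(question)
      context = "\n\n".join(passages)
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": "Answer using ONLY the provided context. "
                          "If the context does not contain the answer, say so."},
              {"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {question}"},
          ],
      )
      return response.choices[0].message.content

The key design choice is that the prompt instructs the model to answer only from the retrieved context, which is what addresses information gaps and reduces hallucinations.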
Scrappy: Static KB
Conversational AI with access to files you upload directly. Changes to the files are not automatically captured.
Examples
  • ChatGPT Custom GPTs
  • Claude Projects
Robust: Enterprise KB
Conversational AI with access to your internal knowledge in Confluence, Dropbox, etc.
Examples
  • Glean
  • Tettra
  • Coveo
  • Microsoft Copilot
LLM Tech: Text-to-SQL and Data Q&A
Scrappy: ChatGPT Code Interpreter
Upload report(s) and use Code Interpreter to analyze, join, or compare datasets. Fairly reliable, but the analysis approach needs to be validated, so some technical skills are required.
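Behind the scenes, Code Interpreter writes and runs ordinary Python (usually pandas). Here is a hypothetical example of the kind of code it might generate to join two uploaded reports; the file and column names are made up.

  # Illustrative pandas code, similar to what Code Interpreter might generate
  # when asked to join two uploaded reports. Files and columns are hypothetical.
  import pandas as pd

  usage = pd.read_csv("usage_report.csv")      # e.g. account_id, monthly_logins
  contracts = pd.read_csv("contracts.csv")     # e.g. account_id, renewal_date, arr

  # Join the datasets on the shared account identifier
  combined = usage.merge(contracts, on="account_id", how="left")

  # Example comparison: accounts with low usage (candidates for outreach)
  at_risk = combined[combined["monthly_logins"] < 5]
  print(at_risk.head())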
Robust: AI-features in Enterprise Tools
Natural language is mapped into SQL queries to return data and answer questions (a sketch of the pattern follows the examples below).
This technology is still immature and can only handle simple requests that stay close to the data model.
Examples
  • Salesforce Einstein
  • Hubspot Breeze
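A rough sketch of the underlying text-to-SQL pattern, assuming the OpenAI Python SDK; the schema and prompt are illustrative and not how Einstein or Breeze actually work.

  # Illustrative text-to-SQL sketch: the LLM sees the schema and the question,
  # and returns a SQL query that your application then reviews and executes.
  from openai import OpenAI

  client = OpenAI()

  SCHEMA = """
  accounts(account_id, name, segment, arr)
  tickets(ticket_id, account_id, opened_at, priority, status)
  """  # hypothetical schema

  def question_to_sql(question: str) -> str:
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": "You write a single SQLite query for the given schema. "
                          "Return only SQL, no explanation.\n" + SCHEMA},
              {"role": "user", "content": question},
          ],
      )
      return response.choices[0].message.content

  sql = question_to_sql("How many high-priority tickets did each enterprise account open last month?")
  print(sql)  # review before running against your database

This is also why the technology struggles once a question drifts away from the data model: the generated SQL should always be reviewed or validated before it is run.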
LLM Tech: AI Automation
LLMs can be used in automation workflows for a variety of tasks, from generating language to evaluating or transforming text and extracting important information. A sketch of an evaluation step follows the examples below.

Examples

  • Generation: Write an email to a customer when usage changes
  • Evaluation: Read an incoming email and assess whether the customer is blocked or is reporting an urgent issue
  • Transformation: Turn AI meeting notes into a post-meeting email to send to the customer
  • Extraction: Parse new contract details and input them into Salesforce
  • Enrichment: Review interactions for the month and assign customer sentiment
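As a concrete illustration of the Evaluation example above, here is a minimal sketch of an LLM step that classifies an incoming customer email so the rest of the workflow can route it. It assumes the OpenAI Python SDK; in Zapier or Make this would typically be a built-in AI step rather than custom code, and the labels are made up.

  # Minimal sketch of an LLM "evaluation" step inside an automation:
  # classify an incoming customer email so the workflow can route it.
  from openai import OpenAI

  client = OpenAI()

  def classify_email(body: str) -> str:
      """Return one of: 'blocked', 'urgent_issue', 'routine' (illustrative labels)."""
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": "Classify the customer email as exactly one of: "
                          "blocked, urgent_issue, routine. Reply with the label only."},
              {"role": "user", "content": body},
          ],
      )
      return response.choices[0].message.content.strip()

  label = classify_email("We can't log in and our launch is tomorrow!")
  if label in ("blocked", "urgent_issue"):
      pass  # e.g. notify the CSM in Slack or create a priority ticket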

Scrappy: Leverage IFTTT
Old classics for automation and connecting systems can now be used to dynamically write emails and evaluate replies.
Examples
  • Zapier or Make
  • ChatGPT Tasks and Operator
Robust: AI-Native Applications
Custom solutions and enterprise frameworks that use LLMs are expected to revolutionize traditional BPA products like Mulesoft and PowerAutomate.
LLM Tech: Agentic AI
Agentic workflows vary in their level of complexity and autonomy (a minimal loop is sketched after this list).
  • Tasks: the ability to browse the web, read or update an internal system, write code, send emails
  • Planning: determines how to break down a request and identifies the tasks needed to complete it
  • Verification: the ability to validate success and resolve issues
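A highly simplified sketch of that plan-act-verify loop, assuming the OpenAI Python SDK. The "tool call" is a placeholder; a real agent would browse the web, hit an API, or send an email at that point.

  # Toy plan-act-verify agent loop. Tool execution is a placeholder.
  from openai import OpenAI

  client = OpenAI()

  def llm(prompt: str) -> str:
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": prompt}],
      )
      return response.choices[0].message.content

  def run_agent(request: str, max_steps: int = 5) -> None:
      # Planning: break the request down into sub-tasks
      plan = llm(f"Break this request into a short numbered list of sub-tasks:\n{request}")
      for _ in range(max_steps):
          # Tasks: decide on and (pretend to) perform the next action
          action = llm(f"Plan:\n{plan}\n\nWhat is the single next concrete action?")
          result = f"(pretend we executed: {action})"  # placeholder for a real tool call
          # Verification: check whether the request is satisfied, correct course if not
          verdict = llm(f"Request: {request}\nLatest result: {result}\n"
                        "Reply DONE if the request is satisfied, otherwise describe what to fix.")
          if verdict.strip().upper().startswith("DONE"):
              break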
Scrappy
Examples
  • Google DeepResearch
  • OpenAI Operator
Robust
Enterprise agents built in-house using platforms like AutoGen, or enterprise solutions; these are often functional, but the technology is still very early!
Breakouts
Which of these can you see being most impactful to your organization?
If you were going to set up an AI agent as your personal assistant, what would you want it to do?
About Mariena
Mariena Quintanilla
AI educator and consultant
I’ve been helping organizations use Data and AI for 10+ years.
I love to hike in mossy forests and work on DIY home projects.
Mellonhead
Founded in January 2024
Providing AI Consulting and Education Services
A mission-driven company focused on democratizing tech and up-skilling regular folks through AI solutions and hands-on workshops.
Want to go farther with AI?
Appendix
How Large Language Models (LLMs) Work

Training: How LLMs Learn Language

  • Learning Patterns Through Probability: LLMs analyze massive datasets to understand the likelihood of word sequences based on how often they appear together.
  • Distributions of Words: LLMs analyze word distributions to comprehend language patterns and relationships between words, and to capture nuances like tone and style.
  • Conditional Probability: LLMs adjust probabilities based on context, refining predictions to fit the situation (see the toy example below).
Learn More:
  • Generative AI exists because of the transformer
  • An Intuitive Guide to How LLMs Work
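To make the probability idea concrete, here is a toy example that estimates the probability of the next word given the previous one from simple word-pair counts. Real LLMs learn these conditional probabilities with neural networks over tokens rather than raw counts, so this is only an intuition aid.

  # Toy next-word probability from bigram counts (intuition only).
  from collections import Counter, defaultdict

  corpus = "the customer opened a ticket and the customer asked a question".split()

  following = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      following[prev][nxt] += 1

  def p_next(prev: str, nxt: str) -> float:
      counts = following[prev]
      return counts[nxt] / sum(counts.values()) if counts else 0.0

  print(p_next("the", "customer"))  # 1.0 in this tiny corpus
  print(p_next("a", "ticket"))      # 0.5: "a" is followed by "ticket" or "question"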

Training: The Data

  • Public forums and websites
  • Open-source code
  • 70,000+ books and research papers
Data on the internet that requires authentication is not included. That includes paid subscriptions to news articles and private Google documents and records.

Inference: How LLMs Generate Text

  • Context Drives Coherence: LLMs consider the context provided by surrounding words.
  • Probabilities Predict: The model assigns probabilities to all possible next words based on the previous words.
  • Sampling for Variation: LLMs sample from the probability distribution, introducing variation and creativity (see the sketch below).
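A small sketch of the predict-then-sample step: made-up scores for a few candidate next words are turned into a probability distribution (softmax with a temperature) and one word is sampled from it.

  # Toy illustration of sampling the next word from a probability distribution.
  # Scores and vocabulary are made up; real models score tens of thousands of tokens.
  import math, random

  candidates = {"renewal": 2.0, "upgrade": 1.2, "churn": 0.3}  # hypothetical scores
  temperature = 0.8  # lower = more predictable, higher = more varied

  # Softmax with temperature turns scores into probabilities that sum to 1
  exps = {w: math.exp(s / temperature) for w, s in candidates.items()}
  total = sum(exps.values())
  probs = {w: e / total for w, e in exps.items()}

  # Sampling introduces variation: the most likely word is usually, not always, chosen
  next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
  print(probs, "->", next_word)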

Key Strengths of LLMs
Task Flexibility
LLMs can handle a wide range of tasks, from summarizing reports to answering questions, translating text, or drafting documents, adapting seamlessly to different use cases
Style Adaptation
LLMs can adjust their writing to match a specific tone, format, or audience, whether it’s crafting a formal financial analysis or an approachable educational guide.
Knowledge Aggregation
LLMs connect information across sources to answer complex questions or simplify dense material.
Language Generation
LLMs produce human-like text, generating clear, coherent, and contextually appropriate responses in a variety of languages.
The Limitations of LLMs
By understanding the limitations of LLMs, and the risks associated with their use, you can evaluate their suitability for different tasks and recognize the importance of reviewing their outputs for accuracy and relevance. This is not an exhaustive list.
Hallucination
Models may generate false outputs framed with high confidence
Traceability
Models are considered "black boxes" because the underlying logic behind their outputs is not traceable
Security threats
Increased threat of cyberattacks, deep fakes, impersonation, and fraud
Data confidentiality
Using public models can lead to leakage of proprietary data
Stale Public Knowledge
LLMs on their own don't have access to recent or confidential data, limiting Enterprise use
Reactive
Basic uses are reactive: chatbots respond when prompted but do not initiate conversations or work on their own.
Passive
Basic uses serve as a guide, not taking action or performing tasks themselves. They don't "do" anything.