
Workshop Offline LLMs

Available as an in-house workshop at your company, online, or at our location in Wiesbaden

 

For anyone interested in artificial intelligence who wants to understand and practically apply the possibilities of offline large language models (LLMs). This workshop demonstrates how to use LLMs locally—independently of cloud providers—and provides a practical overview of typical use cases in the enterprise.


LEARNING OBJECTIVES AND AGENDA

Goals:

  • Understanding how offline LLMs work

  • Understand the differences from cloud-based LLMs

  • Installation and setup of local LLMs (Ollama, LM Studio, GPT4All)

  • Using Python to control and integrate LLMs

  • Connecting offline LLMs with tools, e.g. databases, search APIs, automation

  • Identify opportunities and limitations for practical use in the company

OPEN TRAINING

In-person event in Wiesbaden

or online seminar

€1,090.00

per person, plus statutory VAT

In-person events take place in Wiesbaden and are held with a minimum of two registrations (offer guarantee)

IN-HOUSE SEMINAR

Seminars held at the customer's location

€1,390.00

per day up to 4 participants plus statutory VAT

All content of the in-house seminars is individually tailored to specific target groups.


Intensive follow-up support enables participants to put their knowledge into practice quickly.

Recommended seminar duration: 2 days

Rental fee for training notebooks (on request): €60.00 per day, per training computer

WORKSHOP

You tell us your topics!

Price on request

plus statutory VAT and travel expenses if applicable

All workshop content is individually tailored to specific target groups.

We are happy to conduct the workshop at your location, in Wiesbaden or online.

Rental fee for training notebooks (on request): €60.00 per day, per training computer

Contents of the workshop

  • Understanding how offline LLMs work

  • Learn about the differences from cloud-based LLMs: quantization (8-bit, 4-bit) and model size.

  • Installation and setup of local LLMs (e.g., with Ollama), including hardware requirements and notes on speed.

  • Using Python to control and integrate LLMs.

  • Connecting offline LLMs with tools, especially interfaces to search APIs and databases.

  • Evaluating results with a structured approach: classic metrics as KPIs, complemented by the G-Eval approach.
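As a rough illustration of the quantization point above: a quantized model's memory footprint can be estimated as parameter count × bits per weight ÷ 8. The sketch below makes this arithmetic explicit; it deliberately ignores real-world overhead such as the KV cache and activations.

```python
def model_size_gb(params_billion: float, bits: int) -> float:
    """Rough memory footprint of a quantized model in GB.

    Ignores overhead such as the KV cache and activation memory.
    """
    bytes_per_weight = bits / 8
    return params_billion * 1e9 * bytes_per_weight / 1e9

# A 7B-parameter model at different quantization levels:
print(model_size_gb(7, 16))  # ~14 GB in fp16
print(model_size_gb(7, 8))   # ~7 GB at 8-bit
print(model_size_gb(7, 4))   # ~3.5 GB at 4-bit, within reach of consumer GPUs
```

This back-of-the-envelope estimate is why 4-bit quantization is the typical entry point for running LLMs on consumer hardware.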

Notes on implementation

There are now excellent interfaces that provide external information, including tools such as Tavily, which supplies web search results. This allows offline models to be smaller and to run on consumer hardware at reasonable speeds. The advantage: no sensitive information leaves the company.

  • This is a fully hands-on workshop: every part includes practical work.

  • We use different hardware and can therefore demonstrate how quickly each approach works.

  • Everything is implemented in Python.

  • We build interfaces, especially to search engines, allowing us to achieve performance similar to cloud-based tools.
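The retrieval step behind such a search-engine interface can be sketched in a few lines: fetch external snippets, then prepend them as context to the prompt before it goes to the local model. In this sketch, `web_search` is a stub standing in for a real search API such as Tavily, and the actual model call is only indicated as a comment.

```python
def web_search(query: str) -> list[str]:
    """Stub: a real implementation would call a search API (e.g. Tavily)."""
    return [
        "Wiesbaden is the capital of the German state of Hesse.",
        "The city lies on the right bank of the Rhine.",
    ]

def build_prompt(question: str) -> str:
    """Combine retrieved snippets and the user question into one prompt."""
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("What is the capital of Hesse?")
# `prompt` would now be sent to the local model, e.g. via Ollama's HTTP API.
print(prompt)
```

Because fresh facts arrive through the search interface, the local model only needs to read and summarize them, which is exactly why a small quantized model can keep up with cloud-based tools on such tasks.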

CONTENTS

This workshop is aimed at anyone who wants to understand the functionality of offline Large Language Models (LLMs) and put them into practice. It focuses on practical scenarios in which LLMs are run locally and controlled with Python. Participants will learn how to integrate offline models into business processes – independent of cloud providers and with full control over the data.

First, the differences between offline and cloud-based LLMs are presented. This includes quantization (8-bit, 4-bit), model sizes, hardware requirements, and performance aspects. This is followed by practical installation and setup of local LLMs (e.g., with Ollama) and integration via Python.
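Integration via Python can be as simple as talking to Ollama's local HTTP endpoint with the standard library. The sketch below builds such a request; the model name "llama3" is an example, and the actual network call (which requires a running Ollama server) is shown only as a comment.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama3", "Explain quantization in one sentence.")
# With a local Ollama server running, the call would be:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Since everything runs against localhost, no prompt or answer ever leaves the machine.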

Another focus is on connecting offline LLMs with tools, especially interfaces to search APIs and databases. Different hardware setups are presented, making it clear under which conditions each approach is appropriate.

The workshop concludes with an evaluation of the results using a structured approach: classic metrics (KPIs) are supplemented by modern methods such as the G-Eval approach.
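The classic-metrics side of this evaluation can be illustrated with a token-overlap F1 score, which compares a model answer against a reference answer (G-Eval itself uses an LLM as judge and is not sketched here). This is a minimal illustration, not the workshop's exact metric set.

```python
def token_f1(reference: str, candidate: str) -> float:
    """Token-overlap F1: a classic KPI for comparing a model answer
    against a reference answer."""
    ref = reference.lower().split()
    cand = candidate.lower().split()
    if not ref or not cand:
        return 0.0
    overlap = 0
    remaining = ref.copy()
    for tok in cand:
        if tok in remaining:  # count each reference token at most once
            overlap += 1
            remaining.remove(tok)
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the capital of hesse is wiesbaden",
               "wiesbaden is the capital"))  # -> 0.8
```

Such simple KPIs are cheap to compute over a whole test set and complement judgment-based methods like G-Eval, which capture fluency and faithfulness that token overlap misses.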

IMPLEMENTATION INSTRUCTIONS

  • The workshop is entirely practical: all parts involve hands-on work.

  • Various hardware scenarios are used to realistically estimate speeds and limits.

  • All exercises are implemented in Python.

  • We build interfaces to search engines and show how they can achieve similar performance to cloud-based tools.

  • A clear advantage: No sensitive data leaves the company, as only local models are used.
