Request LLM API key
Experimental OpenAI-compatible local LLM inference instances of NHR@FAU
Terms of use:
- This experimental service is provided as-is for evaluation and research purposes only and may fail or be stopped at any time without prior notice.
- The models provided are subject to change at any time at NHR@FAU’s sole discretion.
- NHR@FAU may introduce rate limiting as needed if the request volume becomes too high.
- There is no entitlement to an API key. NHR@FAU may refuse or revoke API keys at any time at its sole discretion.
- Usage per API key is accounted for (i.e., the number of tokens transferred is recorded), but queries and answers are never stored.
- By using the local LLM inference instances of NHR@FAU, you agree to provide a short report on your usage upon request.
Usage examples
Once you have received your personal API key, you can use the LLM inference instances as follows. The examples assume the API key is made available to the code via the environment variable LLMAPI_KEY:
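To make the key available, export it in your shell before running any of the examples (the value below is a placeholder, not a real key):

```shell
# Make the API key available to curl and Python via the environment.
# Replace the placeholder with the key you received from NHR@FAU.
export LLMAPI_KEY="<your-api-key>"
```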
Show available models with curl:
curl -s -H "Authorization: Bearer $LLMAPI_KEY" \
  https://hub.nhr.fau.de/api/llmgw/v1/models | jq .
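The chat endpoint can be queried with curl as well. The following is a minimal sketch, assuming the model gpt-oss-120b is available to you (use a model name from the listing above):

```shell
# Send a minimal chat request to the OpenAI-compatible endpoint and
# extract the assistant's answer with jq
curl -s https://hub.nhr.fau.de/api/llmgw/v1/chat/completions \
  -H "Authorization: Bearer $LLMAPI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-oss-120b",
       "messages": [{"role": "user", "content": "Hello!"}]}' \
  | jq -r '.choices[0].message.content'
```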
Simple chat using the OpenAI python module:
import os

from openai import OpenAI

# Initialize the client with the NHR@FAU endpoint URL and your API key
client = OpenAI(
    api_key=os.getenv("LLMAPI_KEY"),
    base_url="https://hub.nhr.fau.de/api/llmgw/v1",
)

# Create a chat completion request
response = client.chat.completions.create(
    model="gpt-oss-120b",  # Replace with a model name available to you!
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Help me using the OpenAI API! Keep in mind that I have to use https://hub.nhr.fau.de/api/llmgw/v1 as base URL for the API and consider that OpenAI changed its API in version 1.0.0. Simple curl command lines are also welcome."},
    ],
    temperature=0.7,  # Optional parameter
)

# Print the response
print(response.choices[0].message.content)
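Since the gateway speaks the standard OpenAI HTTP protocol, the openai package is not strictly required. The following sketch uses only Python's standard library; the helper function name is our own, and the model name is again just an example:

```python
import json
import os
import urllib.request

BASE_URL = "https://hub.nhr.fau.de/api/llmgw/v1"

def build_chat_request(model, messages, api_key):
    """Build an HTTP request for the OpenAI-compatible chat endpoint."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    req = build_chat_request(
        "gpt-oss-120b",  # Replace with a model name available to you!
        [{"role": "user", "content": "Hello!"}],
        os.environ["LLMAPI_KEY"],
    )
    # Send the request and print the assistant's answer
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    print(answer["choices"][0]["message"]["content"])
```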
