# OpenAI

The `@gensx/openai` package provides OpenAI API-compatible components for GenSX.
## Installation

To install the package, run the following command:

```bash
npm install @gensx/openai
```

Then import the components you need from the package:

```tsx
import { OpenAIProvider, GSXChatCompletion } from "@gensx/openai";
```
## Supported components

| Component | Description |
| --- | --- |
| `OpenAIProvider` | OpenAI provider that handles configuration and authentication for child components |
| `GSXChatCompletion` | Enhanced component with additional features for OpenAI chat completions |
| `ChatCompletion` | Simplified component for chat completions with a streamlined output interface |
| `OpenAIChatCompletion` | Low-level component that directly matches the OpenAI SDK interface for the Chat Completions API |
| `OpenAIResponses` | Low-level component that directly matches the OpenAI SDK interface for the Responses API |
| `OpenAIEmbedding` | Low-level component that directly matches the OpenAI SDK interface for the Embeddings API |
## Component Comparison

The package provides three different chat completion components to suit different use cases:

- `OpenAIChatCompletion`: Direct mapping to the OpenAI API with identical inputs and outputs
- `GSXChatCompletion`: Enhanced component with additional features like structured output and automated tool calling
- `ChatCompletion`: Simplified interface that returns string responses or simple streams while keeping inputs identical to the OpenAI API
## Reference

### `<OpenAIProvider/>`

The `OpenAIProvider` component initializes and provides an OpenAI client instance to all child components. Any component that uses OpenAI's API needs to be wrapped in an `OpenAIProvider`.
```tsx
<OpenAIProvider
  apiKey="your-api-key" // Your OpenAI API key
  organization="org-id" // Optional: Your OpenAI organization ID
  baseURL="https://api.openai.com/v1" // Optional: API base URL
/>
```
By configuring the `baseURL`, you can also use the `OpenAIProvider` with other OpenAI-compatible APIs like x.AI and Groq.
```tsx
<OpenAIProvider
  apiKey="your-api-key" // Your Groq API key
  baseURL="https://api.groq.com/openai/v1"
/>
```
#### Props

The `OpenAIProvider` accepts all configuration options from the OpenAI Node.js client library, including:

- `apiKey` (required): Your OpenAI API key
- `organization`: Optional organization ID
- `baseURL`: Optional API base URL
### `<GSXChatCompletion/>`

The `GSXChatCompletion` component is an advanced chat completion component that provides enhanced features beyond the standard OpenAI API. It supports structured output, tool calling, and streaming, with automatic handling of tool execution.

To get structured output, pass a Zod schema to the `outputSchema` prop.
```tsx
// Returns an object matching the outputSchema when executed
<GSXChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    {
      role: "user",
      content: "Extract the name and age from: John Doe, 32 years old",
    },
  ]}
  outputSchema={z.object({
    name: z.string(),
    age: z.number(),
  })}
/>
```
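Conceptually, structured output amounts to having the model emit JSON and validating it against the schema. A minimal stand-in for that validation step (the real component uses Zod; the `Person` shape below mirrors the schema in the example):

```typescript
// The shape described by the outputSchema in the example above.
interface Person {
  name: string;
  age: number;
}

// Parse a model reply and check it against the expected shape,
// roughly what Zod's schema.parse does for the component.
function parsePerson(raw: string): Person {
  const data = JSON.parse(raw) as Record<string, unknown>;
  if (typeof data.name !== "string" || typeof data.age !== "number") {
    throw new Error("Response does not match schema");
  }
  return { name: data.name, age: data.age };
}
```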
To use tools, create a `GSXTool` object:

```tsx
const calculator = GSXTool.create({
  name: "calculator",
  description: "Perform mathematical calculations",
  schema: z.object({
    expression: z.string(),
  }),
  run: async ({ expression }) => {
    // eval is used here for brevity only; use a proper expression
    // parser in production code.
    return { result: eval(expression) };
  },
});
```
Then pass the tool to the `tools` prop.

```tsx
<GSXChatCompletion
  model="gpt-4o"
  messages={[{ role: "user", content: "What is 123 * 456?" }]}
  tools={[calculator]}
/>
```
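Automatic tool execution means the model emits a tool call whose arguments arrive as a JSON string, GenSX runs the matching tool, and the result is fed back to the model. A standalone sketch of that dispatch step, using a plain function in place of the `GSXTool` above:

```typescript
// A plain function standing in for the calculator tool's run handler.
// As in the tool example above, eval is for demonstration only.
async function runCalculator(args: {
  expression: string;
}): Promise<{ result: number }> {
  return { result: eval(args.expression) as number };
}

// Tool-call arguments arrive from the model as a JSON string;
// decode them and invoke the handler, as GenSX does automatically.
async function dispatchToolCall(
  argumentsJson: string,
): Promise<{ result: number }> {
  const args = JSON.parse(argumentsJson) as { expression: string };
  return runCalculator(args);
}
```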
#### Props

The `GSXChatCompletion` component accepts all parameters from OpenAI's chat completion API plus additional options:

- `model` (required): ID of the model to use (e.g., `"gpt-4o"`, `"gpt-4o-mini"`)
- `messages` (required): Array of messages in the conversation
- `stream`: Whether to stream the response (when `true`, returns a `Stream<ChatCompletionChunk>`)
- `tools`: Array of `GSXTool` instances for function calling
- `outputSchema`: Zod schema for structured output (when provided, returns data matching the schema)
- `structuredOutputStrategy`: Strategy to use for structured output. Supported values are `default`, `tools`, and `response_format`.
- Plus all standard OpenAI chat completion parameters (`temperature`, `maxTokens`, etc.)
#### Return Types

The return type of `GSXChatCompletion` depends on the props:

- With `stream: true`: Returns `Stream<ChatCompletionChunk>` from the OpenAI SDK
- With `outputSchema`: Returns data matching the provided Zod schema
- Default: Returns `GSXChatCompletionResult` (the OpenAI response with message history)
### `<ChatCompletion/>`

The `ChatCompletion` component provides a simplified interface for chat completions. It returns either a string or a simple stream of string tokens, while accepting inputs identical to the OpenAI API.
```tsx
// Returns a string when executed
<ChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's a programmable tree?" },
  ]}
  temperature={0.7}
/>
```

```tsx
// Returns an AsyncIterableIterator<string> when executed
<ChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's a programmable tree?" },
  ]}
  temperature={0.7}
  stream={true}
/>
```
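With `stream={true}`, the resolved value can be consumed with `for await`. A runnable sketch, with a hypothetical generator standing in for the component's token stream:

```typescript
// Stand-in for the AsyncIterableIterator<string> the component
// yields when stream={true}; the tokens here are made up.
async function* tokenStream(): AsyncIterableIterator<string> {
  for (const token of ["A ", "programmable ", "tree."]) {
    yield token;
  }
}

// Accumulate streamed tokens into the full response text.
async function collect(
  stream: AsyncIterableIterator<string>,
): Promise<string> {
  let text = "";
  for await (const token of stream) {
    text += token;
  }
  return text;
}
```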
#### Props

The `ChatCompletion` component accepts all parameters from OpenAI's chat completion API:

- `model` (required): ID of the model to use (e.g., `"gpt-4o"`, `"gpt-4o-mini"`)
- `messages` (required): Array of messages in the conversation
- `temperature`: Sampling temperature (0-2)
- `stream`: Whether to stream the response
- `maxTokens`: Maximum number of tokens to generate
- `responseFormat`: Format of the response (example: `{ "type": "json_object" }`)
- `tools`: Array of `GSXTool` instances for function calling
#### Return Types

- With `stream: false` (default): Returns a string containing the model's response
- With `stream: true`: Returns an `AsyncIterableIterator<string>` that yields tokens as they're generated
### `<OpenAIChatCompletion/>`

The `OpenAIChatCompletion` component is a low-level component that directly maps to the OpenAI SDK. It has identical inputs and outputs to the OpenAI API, making it suitable for advanced use cases where you need full control.
```tsx
<OpenAIChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's a programmable tree?" },
  ]}
  temperature={0.7}
/>
```
#### Props

The `OpenAIChatCompletion` component accepts all parameters from the OpenAI SDK's `chat.completions.create` method:

- `model` (required): ID of the model to use
- `messages` (required): Array of messages in the conversation
- `temperature`: Sampling temperature
- `stream`: Whether to stream the response
- `maxTokens`: Maximum number of tokens to generate
- `tools`: Array of OpenAI tool definitions for function calling
- Plus all other OpenAI chat completion parameters
#### Return Types

- With `stream: false` (default): Returns the full `ChatCompletionOutput` object from the OpenAI SDK
- With `stream: true`: Returns a `Stream<ChatCompletionChunk>` from the OpenAI SDK
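The non-streaming result follows the OpenAI SDK's chat completion shape, where the assistant's text sits at `choices[0].message.content`. A small extraction helper over a minimal slice of that shape:

```typescript
// Minimal slice of the OpenAI chat completion response shape.
interface ChatCompletionLike {
  choices: { message: { content: string | null } }[];
}

// Pull the first choice's text, defaulting to an empty string when
// the model returned no content (e.g. a pure tool-call response).
function firstMessageContent(completion: ChatCompletionLike): string {
  return completion.choices[0]?.message.content ?? "";
}
```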
### `<OpenAIResponses/>`

The `OpenAIResponses` component is a low-level component that directly maps to the OpenAI Responses API. It has identical inputs and outputs to the OpenAI Responses API, making it suitable for advanced use cases where you need full control.
```tsx
<OpenAIResponses
  model="gpt-4o-mini"
  input={[
    {
      role: "user",
      content: "What is JSX?",
    },
  ]}
  truncation="auto"
/>
```
#### Props

The `OpenAIResponses` component accepts all parameters from the OpenAI SDK's `responses.create` method:

- `model` (required): ID of the model to use
- `input` (required): The input to the response
- Plus all other optional parameters from the OpenAI Responses API
#### Return Types

- With `stream: false` (default): Returns the full `Response` object from the OpenAI SDK
- With `stream: true`: Returns a `Stream<ResponseStreamEvent>` from the OpenAI SDK
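When streaming, text arrives as delta events; in the OpenAI Responses API these carry a `type` of `"response.output_text.delta"` with a `delta` string (the slimmed-down event shape below is an assumption for illustration). A sketch that folds a batch of events back into text:

```typescript
// Minimal slice of a Responses API stream event.
interface StreamEventLike {
  type: string;
  delta?: string;
}

// Concatenate the text deltas from a sequence of stream events,
// ignoring non-text events such as response.completed.
function accumulateText(events: StreamEventLike[]): string {
  return events
    .filter((e) => e.type === "response.output_text.delta")
    .map((e) => e.delta ?? "")
    .join("");
}
```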
### `<OpenAIEmbedding/>`

The `OpenAIEmbedding` component is a low-level component that directly maps to the OpenAI SDK's `embeddings.create` method. It has identical inputs and outputs to the OpenAI Embeddings API, making it suitable for advanced use cases where you need full control.
```tsx
<OpenAIEmbedding model="text-embedding-3-small" input="Hello, world!" />
```
#### Props

The `OpenAIEmbedding` component accepts all parameters from the OpenAI SDK's `embeddings.create` method:

- `model` (required): ID of the model to use
- `input` (required): The input to embed (`string` or `string[]`)
- Plus all other optional parameters from the OpenAI Embeddings API
#### Return Types

- Returns the full `CreateEmbeddingResponse` object from the OpenAI SDK
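The embedding vectors in the response's `data[i].embedding` are typically compared with cosine similarity, e.g. for semantic search. A self-contained helper:

```typescript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```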