OpenAI

The @gensx/openai package provides OpenAI API compatible components for GenSX.

Installation

To install the package, run the following command:

npm install @gensx/openai

Then import the components you need from the package:

import { OpenAIProvider, GSXChatCompletion } from "@gensx/openai";

Supported components

| Component | Description |
| --- | --- |
| OpenAIProvider | OpenAI provider that handles configuration and authentication for child components |
| GSXChatCompletion | Enhanced component with additional features for OpenAI chat completions |
| ChatCompletion | Simplified component for chat completions with a streamlined output interface |
| OpenAIChatCompletion | Low-level component that directly matches the OpenAI SDK interface for the Chat Completions API |
| OpenAIResponses | Low-level component that directly matches the OpenAI SDK interface for the Responses API |
| OpenAIEmbedding | Low-level component that directly matches the OpenAI SDK interface for the Embeddings API |

Component Comparison

The package provides three chat completion components to suit different use cases:

  • OpenAIChatCompletion: Direct mapping to the OpenAI API with identical inputs and outputs
  • GSXChatCompletion: Enhanced component with additional features like structured output and automated tool calling
  • ChatCompletion: Simplified interface that returns string responses or simple streams while maintaining identical inputs to the OpenAI API

Reference

<OpenAIProvider/>

The OpenAIProvider component initializes and provides an OpenAI client instance to all child components. Any components that use OpenAI’s API need to be wrapped in an OpenAIProvider.

<OpenAIProvider
  apiKey="your-api-key" // Your OpenAI API key
  organization="org-id" // Optional: Your OpenAI organization ID
  baseURL="https://api.openai.com/v1" // Optional: API base URL
/>

By configuring the baseURL, you can also use the OpenAIProvider with other OpenAI-compatible APIs such as xAI and Groq.

<OpenAIProvider
  apiKey="your-api-key" // Your Groq API key
  baseURL="https://api.groq.com/openai/v1"
/>
Props

The OpenAIProvider accepts all configuration options from the OpenAI Node.js client library including:

  • apiKey (required): Your OpenAI API key
  • organization: Optional organization ID
  • baseURL: Optional API base URL

<GSXChatCompletion/>

The GSXChatCompletion component is an advanced chat completion component that provides enhanced features beyond the standard OpenAI API. It supports structured output, tool calling, and streaming, with automatic handling of tool execution.

To get a structured output, pass a Zod schema to the outputSchema prop.

// Returns an object matching the outputSchema when executed
<GSXChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    {
      role: "user",
      content: "Extract the name and age from: John Doe, 32 years old",
    },
  ]}
  outputSchema={z.object({
    name: z.string(),
    age: z.number(),
  })}
/>

To use tools, create a GSXTool object:

const calculator = GSXTool.create({
  name: "calculator",
  description: "Perform mathematical calculations",
  schema: z.object({
    expression: z.string(),
  }),
  run: async ({ expression }) => {
    return { result: eval(expression) };
  },
});
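The eval call above keeps the demo short, but it executes arbitrary code and should not run on untrusted model output. A minimal safe alternative, restricted to basic arithmetic, could look like this (a hypothetical helper, not part of @gensx/openai):

```typescript
// Recursive-descent evaluator for + - * / and parentheses.
// A safe stand-in for eval() inside the calculator tool's run().
function evaluate(expression: string): number {
  let pos = 0;
  const peek = () => expression[pos];
  const skipSpaces = () => { while (expression[pos] === " ") pos++; };

  function parseNumber(): number {
    skipSpaces();
    const start = pos;
    while (pos < expression.length && /[\d.]/.test(expression[pos])) pos++;
    if (start === pos) throw new Error(`Expected number at position ${start}`);
    return parseFloat(expression.slice(start, pos));
  }

  function parseFactor(): number {
    skipSpaces();
    if (peek() === "(") {
      pos++; // consume "("
      const value = parseExpr();
      skipSpaces();
      if (expression[pos++] !== ")") throw new Error("Missing closing parenthesis");
      return value;
    }
    if (peek() === "-") { pos++; return -parseFactor(); }
    return parseNumber();
  }

  function parseTerm(): number {
    let value = parseFactor();
    for (;;) {
      skipSpaces();
      if (peek() === "*") { pos++; value *= parseFactor(); }
      else if (peek() === "/") { pos++; value /= parseFactor(); }
      else return value;
    }
  }

  function parseExpr(): number {
    let value = parseTerm();
    for (;;) {
      skipSpaces();
      if (peek() === "+") { pos++; value += parseTerm(); }
      else if (peek() === "-") { pos++; value -= parseTerm(); }
      else return value;
    }
  }

  const result = parseExpr();
  skipSpaces();
  if (pos !== expression.length) throw new Error(`Unexpected character at ${pos}`);
  return result;
}
```

Swapping `eval(expression)` for `evaluate(expression)` in the tool's run function keeps the example's behavior for arithmetic while rejecting anything else.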

Then pass the tool to the tools prop.

<GSXChatCompletion
  model="gpt-4o"
  messages={[{ role: "user", content: "What is 123 * 456?" }]}
  tools={[calculator]}
/>
Props

The GSXChatCompletion component accepts all parameters from OpenAI’s chat completion API plus additional options:

  • model (required): ID of the model to use (e.g., "gpt-4o", "gpt-4o-mini")
  • messages (required): Array of messages in the conversation
  • stream: Whether to stream the response (when true, returns a Stream<ChatCompletionChunk>)
  • tools: Array of GSXTool instances for function calling
  • outputSchema: Zod schema for structured output (when provided, returns data matching the schema)
  • structuredOutputStrategy: Strategy to use for structured output. Supported values are default, tools, and response_format.
  • Plus all standard OpenAI chat completion parameters (temperature, maxTokens, etc.)
Return Types

The return type of GSXChatCompletion depends on the props:

  • With stream: true: Returns Stream<ChatCompletionChunk> from OpenAI SDK
  • With outputSchema: Returns data matching the provided Zod schema
  • Default: Returns GSXChatCompletionResult (OpenAI response with message history)
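When outputSchema is provided, the component hands back already-parsed data. If you ever need to validate model JSON yourself at a lower level, a hand-rolled type guard mirrors what the Zod schema expresses. A sketch, using a hypothetical Person shape matching the extraction example above:

```typescript
// Hypothetical result shape mirroring z.object({ name: z.string(), age: z.number() }).
interface Person {
  name: string;
  age: number;
}

// Type guard narrowing unknown JSON to Person.
function isPerson(value: unknown): value is Person {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Person).name === "string" &&
    typeof (value as Person).age === "number"
  );
}

// Parse a raw model reply and fail loudly if it doesn't match the schema.
function parsePerson(raw: string): Person {
  const parsed: unknown = JSON.parse(raw);
  if (!isPerson(parsed)) throw new Error("Response did not match schema");
  return parsed;
}
```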

<ChatCompletion/>

The ChatCompletion component provides a simplified interface for chat completions. It returns either a string or a simple stream of string tokens while having identical inputs to the OpenAI API.

// Returns a string when executed
<ChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's a programmable tree?" },
  ]}
  temperature={0.7}
/>

// Returns an AsyncIterableIterator<string> when executed
<ChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's a programmable tree?" },
  ]}
  temperature={0.7}
  stream={true}
/>
Props

The ChatCompletion component accepts all parameters from OpenAI’s chat completion API:

  • model (required): ID of the model to use (e.g., "gpt-4o", "gpt-4o-mini")
  • messages (required): Array of messages in the conversation
  • temperature: Sampling temperature (0-2)
  • stream: Whether to stream the response
  • maxTokens: Maximum number of tokens to generate
  • responseFormat: Format of the response (example: { "type": "json_object" })
  • tools: Array of GSXTool instances for function calling
Return Types
  • With stream: false (default): Returns a string containing the model’s response
  • With stream: true: Returns an AsyncIterableIterator<string> that yields tokens as they’re generated
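A stream of string tokens can be consumed with a plain for await loop. A sketch, with a mock generator standing in for the component's streaming result (the real stream yields string tokens as described above):

```typescript
// Collect streamed tokens into the full response string.
async function collectTokens(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const token of stream) {
    text += token; // in a UI you would render each token as it arrives
  }
  return text;
}

// Mock stream standing in for a ChatCompletion result with stream: true.
async function* mockStream(): AsyncIterableIterator<string> {
  yield "A programmable ";
  yield "tree is ";
  yield "a workflow graph.";
}
```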

<OpenAIChatCompletion/>

The OpenAIChatCompletion component is a low-level component that directly maps to the OpenAI SDK. It has identical inputs and outputs to the OpenAI API, making it suitable for advanced use cases where you need full control.

<OpenAIChatCompletion
  model="gpt-4o"
  messages={[
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's a programmable tree?" },
  ]}
  temperature={0.7}
/>
Props

The OpenAIChatCompletion component accepts all parameters from the OpenAI SDK’s chat.completions.create method:

  • model (required): ID of the model to use
  • messages (required): Array of messages in the conversation
  • temperature: Sampling temperature
  • stream: Whether to stream the response
  • maxTokens: Maximum number of tokens to generate
  • tools: Array of OpenAI tool definitions for function calling
  • Plus all other OpenAI chat completion parameters
Return Types
  • With stream: false (default): Returns the full ChatCompletionOutput object from OpenAI SDK
  • With stream: true: Returns a Stream<ChatCompletionChunk> from OpenAI SDK
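With the low-level component, streaming yields raw chunks rather than strings, so the assistant message has to be accumulated from each chunk's delta. A sketch assuming the chunk shape used by the OpenAI SDK, where text arrives under choices[0].delta.content:

```typescript
// Minimal chunk shape; the real ChatCompletionChunk type from the OpenAI
// SDK carries more fields (id, model, finish_reason, ...).
interface ChunkLike {
  choices: { delta: { content?: string | null } }[];
}

// Accumulate the assistant message from a chunk stream.
async function accumulateChunks(stream: AsyncIterable<ChunkLike>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta.content ?? "";
  }
  return text;
}

// Mock stream standing in for Stream<ChatCompletionChunk>.
async function* mockChunks(): AsyncIterableIterator<ChunkLike> {
  yield { choices: [{ delta: { content: "Hello" } }] };
  yield { choices: [{ delta: { content: ", world" } }] };
  yield { choices: [{ delta: { content: null } }] }; // final chunk often has no content
}
```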

<OpenAIResponses/>

The OpenAIResponses component is a low-level component that directly maps to the OpenAI Responses API. It has identical inputs and outputs to the OpenAI Responses API, making it suitable for advanced use cases where you need full control.

<OpenAIResponses
  model="gpt-4o-mini"
  input={[
    {
      role: "user",
      content: "What is JSX?",
    },
  ]}
  truncation="auto"
/>
Props

The OpenAIResponses component accepts all parameters from the OpenAI SDK’s responses.create method:

  • model (required): ID of the model to use
  • input (required): The input to the response
  • Plus all other optional parameters from the OpenAI Responses API
Return Types
  • With stream: false (default): Returns the full Response object from OpenAI SDK
  • With stream: true: Returns a Stream<ResponseStreamEvent> from OpenAI SDK
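Responses API streams emit typed events rather than chunks, so consuming them means filtering for the text-delta events. A sketch, assuming the OpenAI Responses API convention that "response.output_text.delta" events carry incremental text in a delta field:

```typescript
// Minimal event shape; the real ResponseStreamEvent union from the
// OpenAI SDK has many more variants.
interface EventLike {
  type: string;
  delta?: string;
}

// Concatenate text deltas, ignoring other event types.
async function collectOutputText(events: AsyncIterable<EventLike>): Promise<string> {
  let text = "";
  for await (const event of events) {
    if (event.type === "response.output_text.delta" && event.delta) {
      text += event.delta;
    }
  }
  return text;
}

// Mock stream standing in for Stream<ResponseStreamEvent>.
async function* mockEvents(): AsyncIterableIterator<EventLike> {
  yield { type: "response.created" };
  yield { type: "response.output_text.delta", delta: "JSX is " };
  yield { type: "response.output_text.delta", delta: "a syntax extension." };
  yield { type: "response.completed" };
}
```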

<OpenAIEmbedding/>

The OpenAIEmbedding component is a low-level component that directly maps to the OpenAI SDK’s embeddings.create method. It has identical inputs and outputs to the OpenAI Embeddings API, making it suitable for advanced use cases where you need full control.

<OpenAIEmbedding model="text-embedding-3-small" input="Hello, world!" />
Props

The OpenAIEmbedding component accepts all parameters from the OpenAI SDK’s embeddings.create method:

  • model (required): ID of the model to use
  • input (required): The input to the embedding (string or string[])
  • Plus all other optional parameters from the OpenAI Embeddings API
Return Types
  • Returns the full CreateEmbeddingResponse object from OpenAI SDK
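The embedding vectors in the response (under data[i].embedding in the OpenAI SDK's CreateEmbeddingResponse) are typically compared with cosine similarity. A minimal sketch:

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("Vector lengths differ");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```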