Openator is a state-of-the-art browser agent that plans and executes actions described in natural language.
This project is under active development, and any help or support is welcome.
Install the package using npm or yarn.
npm i openator
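Or, if you prefer yarn, the equivalent command is:
yarn add openator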
Spin up your first agent with a task.
import { initOpenator, ChatOpenAI } from 'openator';

const main = async () => {
  const llm = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY!,
  });

  const openator = initOpenator({
    llm,
    headless: false,
  });

  await openator.start(
    'https://amazon.com',
    'Find a black wireless keyboard and return the price.',
  );
};

main();
Optionally, you can add variables and secrets to your agent; these will be interpolated by the agent at runtime.
This is especially helpful when you want to pass extra context to the agent, such as a username and password.
import { initOpenator, Variable, ChatOpenAI } from 'openator';

const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

const openator = initOpenator({
  headless: false,
  llm,
  variables: [
    new Variable({
      name: 'username',
      value: 'my username',
      isSecret: false,
    }),
    new Variable({
      name: 'password',
      value: process.env.PASSWORD,
      isSecret: true,
    }),
  ],
});

await openator.start(
  'https://my-website.com',
  'Authenticate with the username {{username}} and password {{password}} and then find the latest news on the website.',
);
Optionally, you can configure the LLM to use a different model or settings.
Below is the configuration type for the ChatOpenAI provider (more providers will be added soon), followed by an example of how to customize it.
export type ChatOpenAIConfig = {
  /**
   * The model to use.
   * @default gpt-4o
   */
  model?: 'gpt-4o' | 'gpt-4o-mini' | 'gpt-4-turbo';

  /**
   * The temperature to use. We recommend setting this to 0 for consistency.
   * @default 0
   */
  temperature?: number;

  /**
   * The maximum number of retries.
   * This is useful when you have a low quota such as Tier 1 or 2.
   * @default 6
   */
  maxRetries?: number;

  /**
   * The maximum number of concurrent requests.
   * Set it to a low value if you have a low quota such as Tier 1 or 2.
   * @default 2
   */
  maxConcurrency?: number;

  /**
   * The OpenAI API key to use.
   */
  apiKey: string;
};
import { ChatOpenAI } from 'openator';

const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',
  temperature: 0,
  maxRetries: 3,
  maxConcurrency: 1,
});
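As a minimal sketch (reusing the initOpenator options from the earlier examples; the model, retry, and concurrency values are only illustrative), the customized provider is passed to the agent the same way as the default one:

import { initOpenator, ChatOpenAI } from 'openator';

// Customized provider, as described above.
const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
  temperature: 0,
  maxRetries: 3,
  maxConcurrency: 1,
});

// Wire it into the agent exactly as in the first example;
// headless: true runs the browser without a visible window.
const openator = initOpenator({
  llm,
  headless: true,
});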
Here is an example of what you can build with Openator. You can find more examples and source code in our main repository; the frontend is not included here, but it is available in our open-source repository.
Example task:
await openator.start(
  'https://amazon.com',
  'Purchase a black wireless keyboard',
);