file: ./content/docs/changelog.mdx

# What's new?

Learn what the latest updates are in Upstreet-world.

import Changelog from "../../../../CHANGELOG.md";

file: ./content/docs/(getting-started)/create-an-agent.mdx

# Create an Agent

Build AI Agents in minutes with the Upstreet Agents SDK.

import Wrapper from '../../../components/preview/wrapper';
import { File, Folder, Files } from 'fumadocs-ui/components/files';
import { Accordion, Accordions } from 'fumadocs-ui/components/accordion';
import { formatNodeVersion } from '../../../lib/utils';
import pkgJson from 'usdk/package.json';

* Ensure your Node version is {formatNodeVersion(pkgJson.engines?.node || '>=22.9.0').toLocaleLowerCase()}. [Install Node](/install#prerequisites)
* Upstreet SDK installed on your computer. [Install SDK](/install)
* Logged in to the SDK. Follow the instructions to [log in here](/install#log-into-the-sdk).

Upstreet AI Agents are persistent digital entities that can autonomously handle tasks, interact with you and your users over chat or social media, and be customized according to your configuration.

**New to designing Agents?** Explore how to define effective Agent objectives and strategies in our [Agent Design Concepts](/concepts/defining-agent-objectives).

**Coming in from another platform like Tavern?** You can import your previous work into an Upstreet Agent. Check out our [Migration Guides](/migration-guides).

## Quickstart: Creating an Agent

Here's how to set up your first Agent.

### Step 1: Running the command

1. **Set up your project directory**\
   First, create an empty directory where you'd like to set up your Agent.
2. **Run the command in your terminal**

```bash
usdk create
```

Run the above command in your terminal within your new directory. This will launch a guided interview, where you'll define the essential properties of your Agent. You can see more options by running `usdk create --help`.

We recommend using the [`pnpm`](https://pnpm.io/installation) (Performant Node Package Manager) package manager with your Agent.

### Step 2: Complete the Agent Interview

The `usdk create` command initiates an interactive "interview" process with the **Interviewer**. Here's what to expect:

1. **Interactive Prompts**\
   The SDK will prompt you with questions, helping you define your Agent's personality, environment, and other key settings.
2. **Simulated Chat with Your Agent**\
   You'll be able to "converse" with the **Interviewer**, defining the Agent's Homespace (its natural habitat) and personality traits through chat-based interactions.
3. **Completion**\
   Once all required fields are captured, the interview concludes, setting up all the Agent features and initialising the necessary files in your directory.

Want to **skip the interview** and jump right in with coding your agent? You can use the `-p` flag to pass a single creation prompt, or the `-y` flag to skip the interview process and create a default agent. You can also omit the agent directory; in that case, a directory will be created for you.

To import your agent from other platforms, `usdk create` also supports [Tavern character cards](https://github.com/malfoyslastname/character-card-spec-v2) and more. See [*Importing a Tavern Agent*](/migration-guides/tavern#create-an-upstreet-agent-from-a-tavern-character-card) for more information.

### File Structure

Assuming you've named your project directory "my-agent," here's the structure you'll see post-setup:
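A representative layout is sketched below (abridged and illustrative; the exact files may vary by SDK version and the features you chose during the interview — `agent.tsx`, `wrangler.toml`, and `.env.txt` are the files referenced throughout these guides):

```
my-agent/
├── agent.tsx        # your Agent's root React component
├── package.json     # Node project manifest and dependencies
├── wrangler.toml    # deployment and environment configuration
├── .env.txt         # secrets and environment variables (keep out of git)
└── packages/
    └── upstreet-agent/  # bundled SDK runtime
```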
For a breakdown of these files, see our [Agent Structure guide](/concepts/usdk/agent-structure).

Remember to keep your Agent's configuration secure. Avoid committing your secret keys to GitHub, and use [Environment Variables](/customize-your-agent#using-environment-variables) to store secrets.

***

## Editing your Agent

If you wish to edit your already-created Agent **through the Interview process**, run the following command in your Agent's directory:

```bash
usdk edit
```

***

## What's next?

Now that you've set up your base Agent, you can dive deeper into Agent customization and capabilities by [writing code in React](/customize-your-agent). If you wish to skip customization and directly launch your Agent on our Platform, check out [Testing your Agent](/test-your-agent) and [Deploying your Agent](/deploy-your-agent).

file: ./content/docs/(getting-started)/customize-your-agent.mdx

# Customize your Agent

Learn how to extend an Agent's functionality using React.

import { Accordion, Accordions } from 'fumadocs-ui/components/accordion';

* Upstreet SDK installed on your computer. [Install SDK](/install)
* Logged in to the SDK. Follow the instructions to [log in here](/install#log-into-the-sdk).
* An Upstreet Agent. [Create an Agent](/create-an-agent)
* Some preliminary knowledge of [React](https://react.dev)

Learn how to extend an Agent's functionality using [React](https://react.dev) components and intuitive primitives to build dynamic, intelligent agents with Upstreet. This guide walks you through customizing your Agent step by step, with examples and tips to inspire you.

## Overview: How an Agent Works

Agents follow a simple but powerful cycle: they **perceive** inputs, **process** them to generate insights or decisions, and **act** on their environment based on those decisions. This perceive-think-act model is broken down into components that work together to define your Agent's functionality.

Read our [*What Are Agents?*](/concepts/what-are-agents) guide to get familiar with Agent basics. Explore [Upstreet's Agent Architecture](/concepts/usdk/architecture) to learn more about how Agents operate behind the scenes.

## Agent Structure at a Glance

If you're familiar with [React](https://react.dev), the structure of an Agent will look familiar. Here's a basic example to get you started:

```tsx
import { Agent, Prompt } from 'react-agents';

const MyAgent = () => (
  <Agent>
    <Prompt>
      This assistant is developed by MultiCortical Example Technologies.
      For support, refer users to our help pages at https://multicortical.example.com/
    </Prompt>
  </Agent>
);

export default MyAgent;
```

The example above shows the basic setup, where a simple prompt is added to guide the Agent's interactions. The `react-agents` library, however, allows much more flexibility through four core components.

## Key Components of an Agent

Using the `react-agents` library, Agent customization is broken down into four core components:

| Component      | Purpose                                                                       |
| -------------- | ----------------------------------------------------------------------------- |
| `<Prompt>`     | Sets initial instructions and context for the Agent's behavior.               |
| `<Action>`     | Defines actions the Agent can perform, utilizing LLM tool-calling facilities. |
| `<Perception>` | Allows the Agent to perceive and react to real-world events.                  |
| `<Task>`       | Schedules tasks for the Agent, enabling it to manage asynchronous processes.  |

You can see all Components [here](/api/components). Import these components from the SDK package:

```tsx
import { Prompt, Action, Perception, Task } from 'react-agents';
```

Each component adds specific functionality, enabling you to build intelligent, responsive Agents. Let's dive into each one in more detail.
An Agent consists of a collection of components implementing these primitives.

## Bringing It All Together

Here's a sample setup that combines these components:

```tsx title="agent.tsx"
import { Agent, Prompt, Action } from 'react-agents';
import { z } from 'zod';

// The <Action> props below follow the conventions used elsewhere in these
// docs; the inactivity reminder and daily report are left as comments since
// their exact <Perception>/<Task> props depend on your use case.
const MyAgent = () => (
  <Agent>
    <Prompt>
      I'm here to help you with information about our services and products.
    </Prompt>
    <Action
      name="getProductDetails"
      description="Retrieve the latest product details based on user query."
      schema={z.object({ query: z.string() })}
      examples={[{ query: 'latest pricing plans' }]}
      handler={async (e) => {
        // Look up product data here, then respond through the agent.
      }}
    />
    {/* <Perception>: remind the user of inactivity after 5 minutes. */}
    {/* <Task>: send a daily summary report. */}
  </Agent>
);

export default MyAgent;
```

This setup provides a well-rounded Agent equipped to respond, perceive, act, and manage scheduled tasks effectively.

### Using environment variables

You may choose to use third-party services when customizing your Agent. These services may have **secrets** or **tokens**. You can keep them safe by creating a `.env.txt` file in your Agent directory and keeping your environment variables there. An example might look like:

```dotenv title=".env.txt"
GOOGLE_API_KEY=abcdefghijklmonp
OPEN_WEATHER_KEY=abcdefghijklmonp
```

You can access these environment variables by using the [`useEnv`](/api/hooks/use-env) hook.

## Custom Logic and Advanced Patterns

With `react-agents`, you can introduce custom logic within components, such as conditional rendering, state management, and data manipulation, using native React hooks and patterns. To explore advanced customization:

* Learn how to make [Custom Components](/advanced/custom-components/with-use-state).
* Learn more about how [agents are structured](/concepts/usdk/agent-structure).
* Our [Pokédex example](/examples/informative-agent) demonstrates a basic, real-world example of how to create a custom component.

***

## Ready for More?

With these foundational components, you can customize your Agent to fit various tasks, from customer support to data processing. Next steps:

* **[Test your Agent](/test-your-agent)**: See how it responds and make adjustments.
* **[Deploy your Agent](/deploy-your-agent)**: Launch your Agent on Upstreet's platform for real-world interactions.

file: ./content/docs/(getting-started)/deploy-your-agent.mdx

# Deploy your Agent

Get your Agent live and see it used across the Internet.

import { Accordion, Accordions } from 'fumadocs-ui/components/accordion';

* Upstreet SDK installed on your computer. [Install SDK](/install)
* Logged in to the SDK. Follow the instructions to [log in here](/install#log-into-the-sdk).
* A configured Upstreet Agent. [Create an Agent](/create-an-agent)
* Sufficient credits on Upstreet's Platform

## Deploy to Upstreet

Ready to unleash your Agent into the world? Simply run:

```bash
usdk deploy
```

This command uploads your Agent to the Upstreet platform, making it accessible via a unique URL provided after a successful deployment.

Deployment consumes Upstreet credits. For more information, see our [pricing section](/pricing).

## Self-hosting

Prefer to run your Agent locally? You can use a Cloudflare tunnel to make it internet-accessible. Note that even with self-hosting, using Upstreet's AI services may still consume credits. For more on pricing models for both deployment options, visit our [Pricing](/pricing) page.

***

## Unpublishing an Agent

You may want to undeploy or unpublish your Agent. To do so, simply run the following command in the Agent's directory:

```bash
usdk unpublish
```

This will pull your Agent off the Upstreet infrastructure and make it unavailable on the Platform. To run it again, simply [redeploy](/deploy-your-agent#deploy-to-upstreet).

***

file: ./content/docs/(getting-started)/index.mdx

# Upstreet Documentation

Learn how to build and deploy AI Agents.
file: ./content/docs/(getting-started)/install.mdx

# Set up the SDK

Get started with the Upstreet Agents SDK and begin building AI Agents today.

import Wrapper from '../../../components/preview/wrapper';
import { Accordion, Accordions } from 'fumadocs-ui/components/accordion';
import { Pre, CodeBlock } from 'fumadocs-ui/components/codeblock';
import { Tab, Tabs } from 'fumadocs-ui/components/tabs';
import pkgJson from 'usdk/package.json';
import { formatNodeVersion } from '../../../lib/utils';

To start building with Upstreet AI Agents, first set up the Upstreet Agents SDK (`usdk`).

If you're new to Agents, check out *[What are Agents?](/concepts/what-are-agents)* for a quick introduction. For more on SDK architecture and capabilities, see our [Concepts guide](/concepts/usdk/architecture).

## Prerequisites

Before getting started, make sure you have the following:

* **macOS**: Use the **Terminal** app, located in Applications > Utilities.
* **Linux**: Use the **Terminal** application, usually accessible from the applications menu.
* **Windows**: Use the **Command Prompt**. You can find the Command Prompt by hitting the Windows key (⊞ Win) and searching for the "Command Prompt" application.

**Important:** We **do not** recommend using PowerShell or ⊞ Win+R on **Windows**, as it often has issues with NodeJS. If you **must** use PowerShell, you can try out [this possible solution.](/errors#running-usdk-in-powershell)

**Node.js {formatNodeVersion(pkgJson.engines?.node || '>=22.9.0')} (required)**

Node.js is the runtime environment that allows you to run JavaScript code outside a web browser.

### Install Node.js

* Go to the [Node.js downloads page](https://nodejs.org/en/download/package-manager/current).
* Follow the instructions to install it as a package on your system, or download the installer for your OS and follow the setup steps.
* Open the terminal for your OS (see above), and run `node -v` to see your Node version.

**NPM (Node Package Manager)** is a tool that comes installed alongside Node.js. It is used to manage packages (libraries, tools, frameworks, etc.) for JavaScript projects. With NPM, you can:

* **Install packages** for your project locally or globally.
* **Share your code** as packages with other developers.
* **Run scripts** for development tasks like building or testing your project.

NPM is bundled with Node.js because it's the default package manager for managing JavaScript dependencies in Node.js projects. When you install Node.js, NPM is automatically installed, so you don't need to install it separately in most cases.

You can **verify the installation of `npm`**:

* Open a terminal or command prompt.
* Run `node -v` to check the installed Node.js version.
* Run `npm -v` to check the installed NPM version.

***

Once you've installed Node.js, you'll have access to both the `node` and `npm` commands, enabling you to download and run JavaScript applications like `usdk`.

Ensure your Node version is {formatNodeVersion(pkgJson.engines?.node || '>=22.9.0').toLocaleLowerCase()}; otherwise `usdk` will **not** work.

## Install from NPM

To install the `usdk` command-line tool, use the following command:

{/* "pnpm (recommended)", */}
{/* ```bash tab="pnpm (recommended)" pnpm install -g usdk ``` */}

```bash tab="npm"
npm install -g usdk
```

### Verify Installation (optional)

To confirm a successful installation, check your SDK version:
```bash title="Terminal"
usdk version
```
This command should return the installed version number. You can view all available versions of `usdk` on [NPM](https://www.npmjs.com/package/usdk?activeTab=versions).

**Tip:** While you can use `npx usdk@latest` to run `usdk` directly, we recommend installing a fixed version for consistency.

## Log into the SDK

Some SDK features require you to be logged in. To log in:

1. Run the command:

   ```bash
   usdk login
   ```

2. A browser will open. Log into Upstreet with your preferred authentication provider.

file: ./content/docs/(getting-started)/test-your-agent.mdx

# Test your Agent

Learn how to run and test your Upstreet Agent before going live.

import Wrapper from 'components/preview/wrapper';
import Image from 'next/image';
import { Accordion, Accordions } from 'fumadocs-ui/components/accordion';

* Upstreet SDK installed on your computer. [Install SDK](/install)
* Logged in to the SDK. Follow the instructions to [log in here](/install#log-into-the-sdk).
* A configured Upstreet Agent. [Create an Agent](/create-an-agent)

Once your Agent is created, you can test its responses and behaviors before deployment. The Upstreet SDK makes testing straightforward, allowing you to interact directly with your Agent.

## Running a Test Session

To start testing, run the following command in your terminal:

```bash
usdk chat <agent-directory>
```

Where `<agent-directory>` is the **relative path** to the directory containing all your Agent's code. [How to create an Agent](/create-an-agent#file-structure)

This command launches an interactive chat session (REPL-like) with your Agent, where you can input prompts and review responses in real time. To exit the chat, type `.exit` and press Enter, or press CTRL + C twice.

AI inferences **do not** run locally and may consume Upstreet credits during testing.

### Hot reloading

**Hot reloading** is supported by default, so while [customizing your Agent](/customize-your-agent), your Agent will immediately update in the chat once you save your code.

## Testing Tips

* **Specific Task Testing:** Prompt your Agent to carry out the exact tasks or interactions you want to verify.
* **Custom Test Cases:** To automate testing, write test cases using [Jest](https://jestjs.io/), ensuring consistency and reliability for complex Agent behaviors.

file: ./content/docs/advanced/custom-voices.mdx

# Custom Voices

You can create and use custom voice models for your Agents.

The `voice` commands in `usdk` allow you to manage custom voices for your agents.

### Available Commands

#### 1. **List Voices**

* **Command**: `voice list`
* **Description**: Displays all voices associated with your account, including voice IDs and names. Use this information to configure the `voiceEndpoint` prop.

***

#### 2. **Create a Voice**

* **Command**: `voice create <name> <file-path>`
* **Requirements**:
  * An audio sample of the voice to clone.
  * Supported formats: **MP3**, **WAV**.
* **Parameters**:
  * `<name>`: The desired name for your voice.
  * `<file-path>`: The path to your audio sample file.

***

#### 3. **Test a Voice**

* **Command**: `voice play <name> "Your test message"`
* **Description**: Allows you to hear how your created voice sounds with a sample message.

***

See the entire command reference for `usdk` [here](/concepts/usdk/command-reference).

file: ./content/docs/advanced/usdk-library.mdx

# Using USDK programmatically

Use the Upstreet Agents SDK as a library in your code.
While the Upstreet Agents SDK (USDK) is designed as a [command-line interface](/concepts/usdk/overview) (CLI) for building, managing, and deploying Upstreet AI Agents, you can also import USDK as a module in your code to access its powerful functionalities programmatically.

Using USDK programmatically gives you more control over the deployment and management of your agents. You can integrate it into your own workflows, automate tasks, or build more complex applications on top of Upstreet's platform. It also eliminates the need for shell execution, providing a clean and efficient way to interact with USDK features directly from your code.

## Installation

To use it as a module in your Node.js application, install USDK locally in your project:

```bash
npm install usdk
```

## Import

You can then import USDK into your Node.js scripts:

```javascript
const usdk = require('usdk');
```

## Usage

Once imported, you can interact with USDK's commands as JavaScript functions. Here's a basic example of how you can use USDK programmatically:

### Example: Log In

```javascript
const usdk = require('usdk');

async function login() {
  try {
    await usdk.login();
    console.log('Logged in successfully!');
  } catch (error) {
    console.error('Login failed:', error);
  }
}

login();
```

### Example: Create an Agent

```javascript
const usdk = require('usdk');

async function createAgent() {
  try {
    const agent = await usdk.create({
      prompt: "Create a new agent",
      feature: ['chat'],
      force: true,
      json: '{"name":"MyAgent","type":"assistant"}'
    });
    console.log('Agent created:', agent);
  } catch (error) {
    console.error('Error creating agent:', error);
  }
}

createAgent();
```

### Example: Deploy an Agent

```javascript
const usdk = require('usdk');

async function deployAgent(agentId) {
  try {
    await usdk.deploy([agentId]);
    console.log('Agent deployed successfully!');
  } catch (error) {
    console.error('Error deploying agent:', error);
  }
}

deployAgent('my-agent-id');
```

## Key Functions Available

Here's a quick overview of the core functions you can use programmatically with USDK:

* `usdk.login()`: Logs in to the USDK.
* `usdk.logout()`: Logs out from USDK.
* `usdk.create(options)`: Creates a new agent.
* `usdk.edit(directory, options)`: Edits an existing agent.
* `usdk.deploy(agentIds)`: Deploys an agent to the network.
* `usdk.rm(agentIds)`: Removes a deployed agent.
* `usdk.pull(agentId, options)`: Pulls the source code of a deployed agent.
* `usdk.status()`: Retrieves the current login status and account details.
* `usdk.chat(agentIds, options)`: Starts a multi-agent chat session.

### Example: Check Status

```javascript
const usdk = require('usdk');

async function checkStatus() {
  try {
    const status = await usdk.status();
    console.log('Current status:', status);
  } catch (error) {
    console.error('Error retrieving status:', error);
  }
}

checkStatus();
```

file: ./content/docs/api/overview.mdx

# Overview

The Upstreet Agents SDK is the first React-based SDK for building and deploying headless AI agents, locally and in the cloud.

import { InlineTOC } from 'fumadocs-ui/components/inline-toc';

React Agents is a groundbreaking framework that brings the power and familiarity of [React](https://react.dev) to AI agent development. Built on React's [reconciliation engine](https://github.com/facebook/react/blob/main/packages/react-reconciler/README.md), it enables developers to create intelligent, autonomous agents using the same tools and patterns they love from React development.
## Quick Start

```jsx
import { Agent, Action } from '@upstreet/agents';
import { z } from 'zod';

function SmartHomeAgent() {
  return (
    <Agent>
      {/* The Action's name and schema props are reconstructed for
          illustration; the handler body is as documented. */}
      <Action
        name="turnOnLights"
        schema={z.object({ lightName: z.string() })}
        handler={(e) => {
          turnOnLights();
          e.data.agent.monologue(`Lights changed: ${e.data.message.args.lightName}`);
        }}
      />
    </Agent>
  );
}
```

## Core Concepts

### The React Agents Architecture

React Agents leverages the [React Reconciler API](https://github.com/facebook/react/blob/main/packages/react-reconciler/README.md) to create a custom renderer specifically designed for AI agents. This architecture provides several key advantages:

1. **Server-First Design**: Optimized for server-side execution
2. **Platform Agnostic**: Ready for multi-platform and edge deployments
3. **Declarative**: Uses React's component model and lifecycle
4. **Type Safety**: Full TypeScript support throughout the stack

### Traditional React vs React Agents

Let's compare how you'd implement similar functionality in traditional React versus React Agents:

#### Traditional React

```jsx
// User interface focused
function LightControl() {
  const [lightName, setLightName] = useState('');

  return (
    <form onSubmit={(e) => turnOnLights(e.target.lightName.value)}>
      <input
        name="lightName"
        value={lightName}
        onChange={(e) => setLightName(e.target.value)}
      />
    </form>
  );
}
```

#### React Agents

```jsx
// Agent behavior focused
function LightControlAgent() {
  return (
    <Agent>
      {/* Illustrative sketch: the form UI is replaced by a declarative
          Action the agent can invoke on its own. */}
      <Action
        name="turnOnLights"
        schema={z.object({ lightName: z.string() })}
        handler={(e) => turnOnLights(e.data.message.args.lightName)}
      />
    </Agent>
  );
}
```

The key difference lies in the focus: traditional React components render user interfaces, while React Agents components render agent behaviors and capabilities.

## Core Components

### Agent

The root component that initializes an agent instance.

```jsx
<Agent>
  {/* Agent actions and behaviors */}
</Agent>
```

### Action

Defines discrete capabilities that an agent can perform.

```jsx
<Action
  name="controlDevice"
  schema={schema}     // see "Schema Definition" below
  examples={examples} // see "Provide Examples" below
  handler={handler}   // see "Error Handling" below
/>
```

## How It Works

React Agents operates through a sophisticated pipeline:

1. **JSX Transformation**: Your component code is transformed into an agent execution plan
2. **Prompt Generation**: The execution plan is converted into a series of prompts
3. **Chain-of-Thought Runtime**: Prompts are processed through a reasoning engine
4. **Action Execution**: The agent performs actions based on its reasoning
5. **State Updates**: Results are reconciled back through React's system

## Best Practices

### 1. Schema Definition

Always define precise schemas for your actions:

```jsx
const schema = z.object({
  action: z.enum(['on', 'off']),
  device: z.string(),
  room: z.string().optional()
});
```

### 2. Provide Examples

Include clear examples for better agent understanding:

```jsx
const examples = [
  { action: 'on', device: 'lights', room: 'living room' },
  { action: 'off', device: 'thermostat' }
];
```

### 3. Error Handling

Implement robust error handling in your handlers:

```jsx
const handler = async (e) => {
  try {
    await performAction(e.data.message.args);
    e.data.agent.monologue('Action completed successfully');
  } catch (error) {
    e.data.agent.monologue(`Error: ${error.message}`);
  }
};
```

## Command Reference

For a complete list of available commands and their usage, refer to our [Command Reference Guide](/concepts/usdk/command-reference).

## Advanced Topics

### Custom Renderers

Create specialized renderers for unique use cases:

```jsx
const customRenderer = createAgentRenderer({
  supportedEvents: ['custom.event'],
  transformEvent: (event) => ({
    type: 'custom.event',
    payload: event
  })
});
```

### State Management

Integrate with existing React state management solutions:

```jsx
function AgentWithState() {
  const [state, dispatch] = useReducer(reducer, initialState);

  return (
    <Agent>
      {/* Agent components */}
    </Agent>
  );
}
```

## Resources

* [GitHub Repository](https://github.com/upstreet/react-agents)
* [Components](/api/components)
* [Examples](/examples)
* [Community Discord](https://upstreet.ai/usdk-discord)

file: ./content/docs/concepts/defining-agent-objectives.mdx

# Defining Agent Objectives

Defining effective objectives for AI agents is key to ensuring they operate successfully, maintain consistency, and achieve desired outcomes with minimal oversight.

import { InlineTOC } from 'fumadocs-ui/components/inline-toc';

When setting up objectives for an Agent created with the Upstreet Agents SDK, several essential strategies and considerations come into play to keep agents focused, accurate, and adaptable.

### 1. **Prioritize Clarity and Conciseness**

* **Be Specific**: Agents work best when objectives are straightforward and focused. Specific instructions reduce the risk of misinterpretation, ensuring the agent remains aligned with its purpose. For instance, instead of an objective like "Assist with customer support," a more precise goal might be "Respond to common customer inquiries related to product features." One way to encode such an objective is sketched below.
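This sketch assumes the `<Prompt>` component from `react-agents`, as introduced in the Customize your Agent guide; the objective wording itself is illustrative:

```tsx
import { Agent, Prompt } from 'react-agents';

// A narrowly scoped objective: concrete duties plus an explicit boundary.
const SupportAgent = () => (
  <Agent>
    <Prompt>
      Respond to common customer inquiries related to product features.
      If a request falls outside product features (for example, billing),
      say so and refer the user to a human support channel.
    </Prompt>
  </Agent>
);

export default SupportAgent;
```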
Upstreet's SDK anticipates scalability with plans to "chain" multiple agents and create "Swarms," which would enable more complex, large-scale task execution by dividing tasks across specialized agents.

### 2. **Use Popular Language and References**

* **Leverage Common Language**: Agents trained on vast datasets are more likely to succeed when objectives are phrased using widely recognized language and references. This approach minimizes misunderstandings and helps prevent hallucinations—where an agent might invent facts or information. Clear, familiar language ensures the agent interprets the task accurately, particularly in high-stakes or nuanced scenarios. Upstreet recommends this practice to enhance agent reliability, especially when the stakes require concrete and verifiable outputs.

### 3. **Establish Guardrails to Manage Scope and Output**

* **Define Boundaries**: Guardrails in agent objectives are like invisible parameters that guide behavior and task limits, ensuring agents stay on task without deviating into irrelevant or erroneous outputs. Guardrails could involve setting response limits, restricting information sources, or applying criteria for acceptable answers. In Upstreet's Agents SDK, guardrails are integral to managing agent outputs in real-time and ensuring consistency across tasks, improving both reliability and accuracy.

### 4. **Embed Contextual Awareness**

* **Use Environmental or Historical Context**: By embedding context into objectives—such as past interactions or environmental constraints—agents can make more nuanced, intelligent decisions. For example, an agent tasked with sales follow-ups can perform better if its objectives incorporate prior client interactions. Upstreet's SDK allows developers to set up memory components, helping agents recall previous tasks or user preferences, enriching engagement continuity and accuracy over time.

### 5. **Prioritize Adaptability with Modular Objectives**

* **Design Objectives to Support Flexibility**: Flexible objectives make it easier for agents to adapt to unexpected inputs or changing requirements. Setting modular objectives, or objectives broken down into smaller, adaptable steps, allows agents to respond dynamically without overextending beyond their initial goal. This modular approach can be particularly powerful within Upstreet's framework, as agents may be part of a larger "Swarm" that collaborates to address broader tasks in smaller, manageable parts, each with clear, adaptable sub-objectives.

### 6. **Incorporate Iterative Feedback Mechanisms**

* **Feedback for Continuous Improvement**: Objectives that include a feedback mechanism help agents learn from performance, refine behavior, and correct errors. By defining objectives to accommodate iterative feedback, developers can continually adjust agent responses based on accuracy and effectiveness, a feature that Upstreet emphasizes for optimal agent performance. Regular feedback-driven updates to objectives ensure agents evolve, enhancing their precision and resilience over time.

### 7. **Balance Ambition with Realism**

* **Set Realistic, Incremental Goals**: Ambitious objectives are ideal, but overly complex or vague goals can hinder an agent's effectiveness. Defining objectives that incrementally build an agent's skills while maintaining realism will help it reach ambitious targets with measurable success.
Upstreet's approach to agent objectives encourages a balanced mix of challenge and achievability, setting the foundation for agents to evolve while still achieving practical results.

By using these strategies in tandem, developers can craft agent objectives that are not only clear and achievable but also designed to scale with the agent's capabilities. Upstreet's Agents SDK integrates these practices by enabling tailored configurations, memory usage, guardrails, and modular tasks that allow agents to operate with consistency, transparency, and effectiveness. Through thoughtful objective-setting, AI agents can reach their potential, becoming reliable digital collaborators for complex tasks.

file: ./content/docs/concepts/memories.mdx

# Memories

One of the key components of a modern Agent is its capability to retain information.

import { InlineTOC } from 'fumadocs-ui/components/inline-toc';

For an Agent to execute tasks effectively and maintain a coherent story over time, memory plays a critical role. Memory systems in AI—structured as short-term, long-term, and sometimes episodic memories—allow an Agent to recall context, adjust behavior, and make decisions that align with past interactions and future objectives. By storing and retrieving memories, an Agent can create a seamless user experience that feels both personalized and consistent.

### 1. **Short-Term Memory for Immediate Task Execution**

* **Role of Working Memory**: In the context of task execution, short-term or *working memory* functions as a dynamic space for storing immediate goals, recent instructions, and current conversation context. Short-term memory is crucial for understanding and processing information within a specific interaction or task window, enabling Agents to handle multi-step instructions or retain context across dialogue turns. In cognitive science, working memory allows humans to "keep track" of ongoing tasks, a concept explored in AI by systems like ACT-R, where temporary storage of task-relevant information enhances processing efficiency ([source](https://www.sciencedirect.com/science/article/pii/S136466139901357X)).
* **Maintaining Context Across Dialogues**: With short-term memory, Agents can seamlessly carry context across different user inputs without needing constant reminders, improving naturalness and reducing repetitive back-and-forth. This is especially useful in complex, multi-turn interactions, such as customer service queries or technical troubleshooting, where the Agent must remember recent points to avoid losing coherence.

### 2. **Long-Term Memory for Personalization and Continuity**

* **Personality and Long-Term Adaptation**: Long-term memory is vital for Agents designed to create a personalized experience over extended periods. By storing persistent user preferences, past interactions, or completed tasks, an Agent can adapt its responses based on accumulated history. This approach parallels human memory in preserving key information, allowing Agents to reference past events naturally and create a continuous narrative. Cognitive architectures like Soar and systems using [semantic memory](https://soar.eecs.umich.edu/) integrate long-term storage for knowledge that persists across sessions, making the interaction feel more consistent and personalized.
* **Consistency Across Sessions**: For Agents designed to maintain a cohesive personality or narrative, long-term memory acts as the backbone of their "identity," supporting behavior that feels stable over time.
  By referencing stored memories, Agents can act on past user interactions or remind the user of past discussions, creating continuity in storytelling, instructional tasks, or social interactions. This aligns with findings in episodic memory research, where maintaining a memory of past experiences contributes to self-coherence and decision-making ([source](https://doi.org/10.1146/annurev.psych.52.1.1)).

### 3. **Episodic Memory for Storytelling and Emotional Depth**

* **Enriching Narrative Interactions**: Episodic memory, which encodes specific events and their emotional context, is key to creating richer narrative interactions. By allowing an Agent to "remember" detailed scenarios—such as moments of excitement, conflict, or resolution—Agents can craft responses that feel personalized and meaningful. Episodic memories empower Agents to recall unique experiences in user interactions, making them capable of storytelling or evoking emotions based on shared history. In NPC development, episodic memory has been used to create narrative depth by letting characters refer back to story events, thereby enhancing player immersion ([source](https://www.sciencedirect.com/science/article/pii/S1574119207801041)).
* **Enhancing Emotional Connection**: When an Agent can "recall" a user's past experiences or reference prior interactions, it can form a more authentic emotional connection, as it reflects empathy and relational memory. For example, an Agent might remember a past user frustration or successful outcome and integrate that knowledge to adjust its responses, just as humans adapt their behavior based on personal history with others. Research on emotional AI suggests that episodic memory can make interactions more relatable and impactful by enabling Agents to mirror human empathy and relational memory ([source](https://arxiv.org/abs/2006.06088)).

### 4. **Procedural Memory for Skill Development and Efficiency**

* **Learning from Repeated Actions**: Procedural memory allows an Agent to retain knowledge of actions and skills learned through repetition, thus enhancing task efficiency and adaptability. By developing a procedural memory, an Agent can remember "how to" perform tasks without explicit prompts, as it learns to refine actions based on past performance. This approach has been applied in skill-based AI where repeated behaviors improve task execution, such as in robotics or customer service bots where learned actions are stored and reused for efficiency ([source](https://doi.org/10.1146/annurev.neuro.24.1.167)).
* **Improving Accuracy in Task Execution**: By recalling procedural memories, Agents can handle complex tasks more accurately, as the learned sequences guide their actions. Procedural memory enhances the Agent's ability to execute tasks in familiar contexts without needing full re-instruction, which can reduce error rates and improve consistency in workflow management or technical tasks.

### 5. **Blending Memories for Holistic Interactions**

* **Integrated Memory Systems**: Combining short-term, long-term, episodic, and procedural memories creates a layered memory system that supports rich, multi-dimensional interactions. For example, short-term memory maintains the current dialogue context, while long-term and episodic memories preserve user preferences and emotional experiences, respectively.
  As described in [*Planning and Acting in a Dynamic Environment: A Cognitive Systems Approach*](https://www.springer.com/gp/book/9783319329791), integrated memory systems allow AI to mimic the natural balance humans strike between immediate needs and stored knowledge, supporting both real-time adaptability and continuity in longer interactions.

By employing these diverse memory types, an Agent not only enhances task execution through recall and skill refinement but also maintains a coherent narrative that deepens user engagement. This multi-memory approach brings an Agent closer to human-like cognitive functioning, enabling it to remember, adapt, and personalize interactions across diverse contexts.

file: ./content/docs/concepts/personalities.mdx

# Personalities

A step towards AGI involves making AI more believable and relatable for humans.

import FullBg from '../../../components/full-bg';
import { InlineTOC } from 'fumadocs-ui/components/inline-toc';

Imbuing an **Agent** with a personality involves structuring its internal architecture to not only respond to tasks and prompts but to generate consistent, personality-based responses informed by a "bio" and "description." These can provide baseline identity details, emotional tendencies, and preferred behavioral patterns that reflect both *internal thoughts* and *social interactions.*

### 1. **Baseline Identity and Bio Setup**

* **Bio and Description as Personality Anchors**: The concept of a bio and description offers the foundation for an agent's "personality." Just as humans have inherent characteristics and histories that shape responses and behaviors, an AI bio provides initial personality traits—such as openness, agreeableness, or conscientiousness—guided by psychological models like the **Five-Factor Model**. This model can structure an agent's responses to align with human personality nuances, as explored by Li & MacDonnell (2008) in their NPC personality engine, where traits impact interaction styles and decision-making patterns ([source](https://spectrum.library.concordia.ca/979942/1/MQ94694.pdf)).

### 2. **Thought Generation and Self-Reflection**

* **Internal Thoughts Through TPO**: The concept of **Thought Preference Optimization (TPO)**, as introduced in [*Thinking LLMs: General Instruction Following with Thought Generation*](https://arxiv.org/pdf/2410.10630), helps to develop internal thought processes in language models. TPO trains agents to generate "internal thoughts" before external actions or responses, mimicking human reflection. With TPO, agents gain the ability to internally assess a situation, match it to their personality "bio," and decide on an action that aligns with both the task and their designed personality.

### 3. **Behavioral Consistency and Emotional Responses**

* **Memory-Influenced Behavior**: Cognitive architectures, such as the [Soar cognitive architecture](https://soar.eecs.umich.edu/), introduce memory models that simulate how humans draw on past experiences to shape reactions. Soar defines **episodic, procedural, and semantic memories** that influence decisions and can make agent responses more predictable yet personal by embedding an emotional or experience-driven context. This framework allows agents to remember prior interactions, which in turn influences their "mood" or preferred responses in future interactions.

### 4. **Thought-Driven Emotional Modelling**
* **Emotional Layers in NPC Models**: Emotion models similar to those used for **Non-Player Characters (NPCs)** incorporate an emotional layer that adapts based on environmental interactions and personal history, such as reacting with "shock," "love," or "anger." For instance, in NPC personality engines like those described by Mac Namee, agents can utilize emotions based on the context of the interaction, making reactions feel more genuine ([source](https://www.scss.tcd.ie/publications/tech-reports/reports.04/TCD-CS-2004-58.pdf)). An agent's "bio" can define its emotional tendencies, while its "description" can refine the triggers and intensity of these emotional responses, leading to consistent yet varied emotional engagement based on prior interactions and internal thoughts.

### 5. **Meta-Reasoning for Thoughtful Decisions**

* **Meta-Prompted Reasoning Structures**: In [*SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures*](https://arxiv.org/pdf/2305.00833), meta-prompts guide an agent's decision-making by breaking tasks into structured reasoning steps, allowing agents to select, adapt, and implement actions with layered thoughtfulness. This framework equips agents to integrate their personality traits and make decisions that align with both practical objectives and their defined personality—such as a "thoughtful" or "cautious" persona.

By embedding a structured bio, internal thought processing through TPO, emotional layers, and memory-influenced reasoning, **agents evolve into "digital beings"** capable of expressing a believable personality. These advancements mirror ongoing explorations into artificial life and digital consciousness, paving the way for agents to interact as personalized digital entities with a persistent identity and realistic self-reflective capabilities. This approach aligns with computational life principles, where an agent's evolution and replication could parallel biological systems in their capacity to learn, adapt, and replicate experiences over time.

file: ./content/docs/concepts/what-are-agents.mdx

# What are Agents?

Agents are a classical example of how AI interacts with the world.

import FullBg from '../../../components/full-bg';
import { InlineTOC } from 'fumadocs-ui/components/inline-toc';

## The Classic definition

An **Agent** in classical AI refers to any entity with 3 fundamental capabilities:

1. **Observing** its environment through sensors,
2. **Processing** that information, and
3. **Acting** upon the environment to achieve specific goals.

The concept draws from Alan Turing's early work on computational machinery, where he envisioned machines capable of following instructions and performing tasks autonomously. His groundbreaking ideas laid the foundation for later development in agent-oriented AI, where agents are tasked with specific objectives and execute actions autonomously to achieve those objectives, mimicking human-like decision-making and problem-solving.

## Agents, today

Over time, **AI agents** evolved from straightforward rule-based systems to more dynamic, complex entities powered by machine learning. Modern agents are sophisticated programs or models that interact with the environment, adapt to user needs, and perform increasingly complex tasks. They encompass varying levels of autonomy, reasoning, and learning capabilities and are commonly used for information retrieval, automation of repetitive tasks, and even decision-making.
**Contemporary AI agents** have extended their functions further. They leverage large language models (LLMs), decision-making frameworks, and even multi-agent systems to break down complex tasks into manageable parts and accomplish them more effectively. These agents are often designed to learn continually, retain context, and respond to nuanced prompts, approaching some characteristics of **Artificial General Intelligence (AGI)**—such as reasoning and adaptability—without fully achieving it.

## The future of Agents and human-like digital beings

The **next generation of agents**, powered by advancements in cognitive architectures and emotion-modelling frameworks, is aiming for even higher levels of adaptability and social-emotional awareness. Here's where the concept of **digital beings** emerges. These agents are designed not only to complete tasks but to embody **personalities, emotions, and memories** that allow them to interact with people and their environments in a lifelike way.

These agents may possess "personalities" grounded in psychological models like the Five-Factor Model (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism), enabling them to exhibit behavior that resonates with human emotional and social cues. As they develop through interactions, they gain experience, adapt to new contexts, and exhibit responses based on accumulated knowledge and "personal" characteristics, which would mark a significant step toward AGI.

Ultimately, these agents could embody **digital beings** with a kind of "self" that influences their interactions, decision-making, and potentially even self-replication—a progression that mirrors the evolutionary dynamics seen in biological life, akin to the ideas of **Computational Life**. With these advancements, agents could transcend single-use utility, acting instead as intelligent, evolving, and emotionally aware counterparts capable of complex reasoning and forming human-like connections.

file: ./content/docs/errors/index.mdx

# Error List

When interacting with the Upstreet Agents Platform and CLI, you may face an error. This page contains as many errors as we have documented so far.

**Found an issue** which isn't documented? Head over to [our public Discord Community](https://upstreet.ai/usdk-discord) to log the issue. For more in-depth help, you can reach out to us at [support@upstreet.ai](mailto:support@upstreet.ai).

***

## Could not run the speaker module

This error may occur when running `usdk chat` locally. The CLI uses operating system-specific audio output backends to play audio, and installing the necessary dependencies may be required to enable audio functionality in the CLI.

To resolve this, follow the steps below:

### Step 1: Install System Build Tools

Some operating systems require specific build tools to compile and run audio modules:

* **Linux (Ubuntu)**: You may need to run:

  ```bash
  sudo apt-get install libasound2-dev
  ```

  Reference: https://github.com/TooTallNate/node-speaker?tab=readme-ov-file#installation

* **macOS**: Install Xcode Command Line Tools. Run the following command:

  ```bash
  xcode-select --install
  ```

* **Windows**: Install Visual Studio Build Tools with **C++ environment support**.
### Step 2: Reinstall `usdk` Globally

After installing the required system build tools, reinstall `usdk` globally to ensure that all dependencies are properly configured:

```bash
npm install -g usdk
```

***

## Running `usdk` in PowerShell

When running `usdk` in PowerShell, you might encounter an error related to the Node.js directory. PowerShell has a known issue with the Node.js installation path, which may prevent `usdk` from running smoothly.

### Solution 1: Switch to CMD

As a quick workaround, consider using the Command Prompt (CMD) instead of PowerShell to run `usdk`:

1. Open a new terminal.
2. Select **Command Prompt** instead of **PowerShell**.
3. Run your `usdk` commands in CMD.

### Solution 2: Set Prefix in .npmrc File

To configure Node.js to work in PowerShell, you can specify the installation path using the `.npmrc` file. This approach sets an explicit prefix path, helping to resolve any directory issues in PowerShell.

#### Steps to Set the Prefix:

1. Open PowerShell.
2. Run the following command to open your `.npmrc` file in Notepad:

   ```powershell
   Notepad "$env:USERPROFILE\.npmrc"
   ```

3. In the `.npmrc` file, add the following line to set the Node.js prefix:

   ```plaintext
   prefix = "C:/Program Files/nodejs"
   ```

4. Save the file and restart PowerShell.

After following these steps, PowerShell should recognize the Node.js installation path correctly, allowing `usdk` to run without directory-related issues.

file: ./content/docs/examples/action-agent.mdx

# Action Agent (Personal Assistant)

This section describes how to build your own personal assistant Agent with Upstreet and the Google Calendar API, using custom React Hooks.

In this guide, we build an **Action Agent** capable of scheduling events on our Google Calendar for us, using the [Google Calendar API](https://developers.google.com/calendar/api/guides/overview). We use [custom React Hooks](https://react.dev/learn/reusing-logic-with-custom-hooks) in this example, since we want to follow clean coding practices.

We define an **Action Agent** as an Agent which can take actions on your behalf.

The source code for this example is available on [GitHub](https://github.com/UpstreetAI/usdk-examples/tree/main/personalAssistant).

{/* ## Video Tutorial You can follow along this example by watching the video below: */}

## Guide

### Step 1: Set up `usdk`

Follow *[Set up the SDK](/install)* to set up Node.js and `usdk`.

### Step 2: Initialize your agent

Create a new agent:

```bash
usdk create <agent-directory> -y
```

This will directly scaffold an agent for you in `<agent-directory>`. [Learn more](/create-an-agent#file-structure)

Your agent directory now contains the Node application and `git` repository for your agent, should you choose to use `git`.

The `-y` flag means to skip the [Agent Interview](/create-an-agent#step-2-complete-the-agent-interview) process, which we don't need here. You can also omit the agent directory; in that case, a directory will be created for you.

### Step 3: Create a `PersonalAssistant` Component

Why manage our calendar manually when an AI agent can handle the task for us? We can easily build an Upstreet Agent to handle Calendar management, reducing scheduling conflicts and saving time.

This example, however, will be very simple. We want our Agent to be able to schedule a Google Calendar Event for us.
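We'll begin with a bare `PersonalAssistant` component in `agent.tsx` and build it up over the next steps: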
```tsx title="agent.tsx"
const PersonalAssistant = () => {
  // We'll add functions, useState, useEffect here
  return <>{/* We can add components here to compose our Agent */}</>
}
```

The `PersonalAssistant` component is just an empty wrapper component for now - it will later utilize a `GoogleCalenderManager` class to interact with the Google Calendar API, allowing users to create Calendar events programmatically.

### Step 4: Using custom Hooks and better practices

In the `agent-renderer.tsx` file, inside the `AgentRenderer` class, we can make a [custom Hook](https://react.dev/learn/reusing-logic-with-custom-hooks) called `useCalendarKeysJson`:

```tsx title="/packages/upstreet-agent/packages/react-agents/classes/agent-renderer.tsx"
const useEnvironment = () => {
  return (env as any).WORKER_ENV as string
}

// place this below the useEnvironment function
const useCalendarKeysJson = () => { // [!code ++]
  const CalenderKeysJsonString = (env as any).CALENDER_KEYS_JSON as string // [!code ++]
  const CalenderKeysJson = JSON.parse(CalenderKeysJsonString) // [!code ++]
  return CalenderKeysJson // [!code ++]
} // [!code ++]
```

In the same file, `AppContextValue` is instantiated; make the modification below in your code.

```tsx
this.appContextValue = new AppContextValue({
  subtleAi,
  agentJson: useAgentJson(),
  CalenderKeysJson: useCalendarKeysJson(), // [!code ++]
  environment: useEnvironment(),
  wallets: useWallets(),
  authToken: useAuthToken(),
  supabase: useSupabase(),
  conversationManager: useConversationManager(),
  chatsSpecification: useChatsSpecification(),
  codecs: useCodecs(),
  registry: useRegistry()
})
```

Now make some changes to the `AppContextValue` class in the `app-context-value.ts` file:

```tsx title="/packages/upstreet-agent/packages/react-agents/classes/app-context-value.ts"
export class AppContextValue {
  subtleAi: SubtleAi
  agentJson: object
  CalenderKeysJson: object // [!code ++]
  // other code remains the same

  constructor({
    subtleAi,
    agentJson,
    CalenderKeysJson, // [!code ++]
    // other code remains the same
  }: {
    subtleAi: SubtleAi
    agentJson: object
    CalenderKeysJson: object // [!code ++]
    // other code remains the same
  }) {
    this.subtleAi = subtleAi
    this.agentJson = agentJson
    this.CalenderKeysJson = CalenderKeysJson // [!code ++]
    // other code remains the same
  }
}
```

In the same file, add the `useCalendarKeysJson` custom hook:

```tsx
useAgentJson() {
  return this.agentJson;
}
useCalendarKeysJson() { // [!code ++]
  return this.CalenderKeysJson; // [!code ++]
} // [!code ++]
// other code remains the same
```

Now add `useCalendarKeysJson` to the `hooks.ts` file:

```tsx title="/packages/upstreet-agent/packages/react-agents/hooks.ts"
export const useAgent = () => {
  const agentContextValue = useContext(AgentContext)
  return agentContextValue
}
export const useCalendarKeysJson = () => { // [!code ++]
  const agentContextValue = useContext(AgentContext) // [!code ++]
  return agentContextValue.appContextValue.useCalendarKeysJson() // [!code ++]
} // [!code ++]
// other code remains the same
```

You can now use `useCalendarKeysJson` as a Hook in your `PersonalAssistant` component.

### Step 5: Integrating the Google Calendar API

Let's build our `GoogleCalenderManager`, which will leverage a service account for authentication and handle token generation, event creation, and error handling.

***

First, you'll need some Google Calendar API credentials:

* Calendar ID
* API Key
* Service Account Email
* Private Key

🔑 Need help getting these? Check out the [Google Calendar API docs](https://developers.google.com/calendar/api/guides/overview).
Add them to your `wrangler.toml`:

```toml title="wrangler.toml"
...
CALENDER_KEYS_JSON = "{\"GOOGLE_API_KEY\":\"\",\"GOOGLE_SERVICE_ACCOUNT_EMAIL\":\"\",\"GOOGLE_PRIVATE_KEY\":\"\",\"GOOGLE_CALENDAR_ID\":\"\"}"
...
```

Let's get back to the code.

***

This code provides an integration with the Google Calendar API by implementing a class called `GoogleCalenderManager`.

```tsx title="agent.tsx"
// Import all the required modules
import { // [!code ++]
  Action, // [!code ++]
  Agent, // [!code ++]
  PendingActionEvent, // [!code ++]
  useCalendarKeysJson // [!code ++]
} from 'react-agents' // [!code ++]
import { z } from 'zod' // [!code ++]

// Integrating the Google Calendar API
interface CalenderEvent { // [!code ++]
  summary: string // [!code ++]
  description: string // [!code ++]
  start: { dateTime: string } // [!code ++]
  end: { dateTime: string } // [!code ++]
} // [!code ++]

class GoogleCalenderManager { // [!code ++]
  private readonly GOOGLE_Calender_ID: string // [!code ++]
  private readonly GOOGLE_API_KEY: string // [!code ++]
  private readonly GOOGLE_SERVICE_ACCOUNT_EMAIL: string // [!code ++]
  private readonly GOOGLE_PRIVATE_KEY: string // [!code ++]

  constructor({ // [!code ++]
    GOOGLE_Calender_ID, // [!code ++]
    GOOGLE_API_KEY, // [!code ++]
    GOOGLE_SERVICE_ACCOUNT_EMAIL, // [!code ++]
    GOOGLE_PRIVATE_KEY // [!code ++]
  }: { // [!code ++]
    GOOGLE_Calender_ID: string // [!code ++]
    GOOGLE_API_KEY: string // [!code ++]
    GOOGLE_SERVICE_ACCOUNT_EMAIL: string // [!code ++]
    GOOGLE_PRIVATE_KEY: string // [!code ++]
  }) { // [!code ++]
    this.GOOGLE_Calender_ID = GOOGLE_Calender_ID // [!code ++]
    this.GOOGLE_API_KEY = GOOGLE_API_KEY // [!code ++]
    this.GOOGLE_SERVICE_ACCOUNT_EMAIL = GOOGLE_SERVICE_ACCOUNT_EMAIL // [!code ++]
    this.GOOGLE_PRIVATE_KEY = GOOGLE_PRIVATE_KEY // [!code ++]
  } // [!code ++]

  private async getAccessToken(): Promise<string> { // [!code ++]
    const now = Math.floor(Date.now() / 1000) // [!code ++]
    const expiry = now + 3600 // Token valid for 1 hour // [!code ++]
    const jwtHeader = btoa(JSON.stringify({ alg: 'RS256', typ: 'JWT' })) // [!code ++]
    const jwtClaimSet = btoa( // [!code ++]
      JSON.stringify({ // [!code ++]
        iss: this.GOOGLE_SERVICE_ACCOUNT_EMAIL, // [!code ++]
        scope: 'https://www.googleapis.com/auth/calendar', // [!code ++]
        aud: 'https://oauth2.googleapis.com/token', // [!code ++]
        exp: expiry, // [!code ++]
        iat: now // [!code ++]
      }) // [!code ++]
    ) // [!code ++]
    const signatureInput = `${jwtHeader}.${jwtClaimSet}` // [!code ++]
    const signature = await this.signJwt(signatureInput) // [!code ++]
    const jwt = `${signatureInput}.${signature}` // [!code ++]
    const tokenResponse = await fetch('https://oauth2.googleapis.com/token', { // [!code ++]
      method: 'POST', // [!code ++]
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' }, // [!code ++]
      body: `grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=${jwt}` // [!code ++]
    }) // [!code ++]
    const tokenData = await tokenResponse.json() // [!code ++]
    return tokenData.access_token // [!code ++]
  } // [!code ++]

  private async signJwt(input: string): Promise<string> { // [!code ++]
    const pemHeader = '-----BEGIN PRIVATE KEY-----' // [!code ++]
    const pemFooter = '-----END PRIVATE KEY-----' // [!code ++]
    const pemContents = this.GOOGLE_PRIVATE_KEY.substring( // [!code ++]
      this.GOOGLE_PRIVATE_KEY.indexOf(pemHeader) + pemHeader.length, // [!code ++]
      this.GOOGLE_PRIVATE_KEY.indexOf(pemFooter) // [!code ++]
    ).replace(/\s/g, '') // [!code ++]
    const binaryDer = this.base64StringToArrayBuffer(pemContents) // [!code ++]
    const cryptoKey = await crypto.subtle.importKey( // [!code ++]
      'pkcs8', // [!code ++]
      binaryDer, // [!code ++]
      { // [!code ++]
        name: 'RSASSA-PKCS1-v1_5', // [!code ++]
        hash: 'SHA-256' // [!code ++]
      }, // [!code ++]
      false, // [!code ++]
      ['sign'] // [!code ++]
    ) // [!code ++]
    const encoder = new TextEncoder() // [!code ++]
    const signatureBuffer = await crypto.subtle.sign( // [!code ++]
      'RSASSA-PKCS1-v1_5', // [!code ++]
      cryptoKey, // [!code ++]
      encoder.encode(input) // [!code ++]
    ) // [!code ++]
    const signatureArray = new Uint8Array(signatureBuffer) // [!code ++]
    return btoa(String.fromCharCode.apply(null, signatureArray)) // [!code ++]
      .replace(/=/g, '') // [!code ++]
      .replace(/\+/g, '-') // [!code ++]
      .replace(/\//g, '_') // [!code ++]
  } // [!code ++]

  private base64StringToArrayBuffer(base64: string): ArrayBuffer { // [!code ++]
    const binaryString = atob(base64) // [!code ++]
    const bytes = new Uint8Array(binaryString.length) // [!code ++]
    for (let i = 0; i < binaryString.length; i++) { // [!code ++]
      bytes[i] = binaryString.charCodeAt(i) // [!code ++]
    } // [!code ++]
    return bytes.buffer // [!code ++]
  } // [!code ++]

  async setCalenderEvent(event: CalenderEvent): Promise<string> { // [!code ++]
    console.log('Creating event:', event) // [!code ++]
    const accessToken = await this.getAccessToken() // [!code ++]
    const response = await fetch( // [!code ++]
      `https://www.googleapis.com/calendar/v3/calendars/${this.GOOGLE_Calender_ID}/events?key=${this.GOOGLE_API_KEY}`, // [!code ++]
      { // [!code ++]
        method: 'POST', // [!code ++]
        headers: { // [!code ++]
          Authorization: `Bearer ${accessToken}`, // [!code ++]
          'Content-Type': 'application/json' // [!code ++]
        }, // [!code ++]
        body: JSON.stringify(event) // [!code ++]
      } // [!code ++]
    ) // [!code ++]
    console.log(response) // [!code ++]
    if (!response.ok) { // [!code ++]
      const errorText = await response.text() // [!code ++]
      throw new Error(`Failed to create event: ${errorText}`) // [!code ++]
    } // [!code ++]
    const result = await response.json() // [!code ++]
    console.log('Event created:', result) // [!code ++]
    return `Event created: ${result.htmlLink}` // [!code ++]
  } // [!code ++]
} // [!code ++]
```

#### Breakdown summary of the `GoogleCalenderManager` class and its functions

1. **Constructor:** Initializes the `GoogleCalenderManager` with Google API credentials (`GOOGLE_Calender_ID`, `GOOGLE_API_KEY`, `GOOGLE_SERVICE_ACCOUNT_EMAIL`, `GOOGLE_PRIVATE_KEY`).
2. **getAccessToken:** Generates an OAuth2 access token using a JWT for authorizing Google Calendar API requests.
3. **signJwt:** Signs a JSON Web Token (JWT) using the private key for secure authorization.
4. **base64StringToArrayBuffer:** Converts a base64-encoded string into an `ArrayBuffer`, which is used for cryptographic operations.
5. **setCalenderEvent:** Posts a new event to the specified Google Calendar using the access token and provided event details.

### Step 6: Initialize the GoogleCalenderManager instance

Now let's modify the `PersonalAssistant` component. In the code snippet below, the credentials are fetched using `useCalendarKeysJson()` and used to initialize the `GoogleCalenderManager` instance.
### Step 6: Initialize the `GoogleCalendarManager` instance

Now let's modify the `PersonalAssistant` component. In the snippet below, the credentials are fetched with `useCalendarKeysJson()` and used to initialize a `GoogleCalendarManager` instance.

```tsx title="agent.tsx"
const PersonalAssistant = () => { // [!code ++]
  // Get the credentials defined in wrangler.toml
  const calendarKeysJson = useCalendarKeysJson() // [!code ++]
  const googleCalendarManager = new GoogleCalendarManager({ // [!code ++]
    GOOGLE_CALENDAR_ID: calendarKeysJson.GOOGLE_CALENDAR_ID, // [!code ++]
    GOOGLE_API_KEY: calendarKeysJson.GOOGLE_API_KEY, // [!code ++]
    GOOGLE_SERVICE_ACCOUNT_EMAIL: calendarKeysJson.GOOGLE_SERVICE_ACCOUNT_EMAIL, // [!code ++]
    GOOGLE_PRIVATE_KEY: calendarKeysJson.GOOGLE_PRIVATE_KEY // [!code ++]
  }) // [!code ++]
  return <>{/* We can add components here to compose our Agent */}</>
}
```

Now we'll use the `<Action>` tag to define how the Agent should respond to the default text perception.

```tsx title="agent.tsx"
return (
  <>
    <Action // [!code ++]
      name="setCalendarEvent" // [!code ++]
      description="Set a new event in the user's Google Calendar." // [!code ++]
      schema={z.object({ // [!code ++]
        summary: z.string(), // [!code ++]
        startDateTime: z.string(), // [!code ++]
        endDateTime: z.string(), // [!code ++]
        description: z.string() // [!code ++]
      })} // [!code ++]
      examples={[ // [!code ++]
        { // [!code ++]
          summary: 'Meeting with John Doe', // [!code ++]
          startDateTime: '2023-06-15T10:00:00-07:00', // [!code ++]
          endDateTime: '2023-06-15T11:00:00-07:00', // [!code ++]
          description: 'Discuss project timeline and requirements.' // [!code ++]
        } // [!code ++]
      ]} // [!code ++]
      handler={async (e: PendingActionEvent) => { // [!code ++]
        const { summary, description, startDateTime, endDateTime } = e.data // [!code ++]
          .message.args as { // [!code ++]
          summary: string // [!code ++]
          description: string // [!code ++]
          startDateTime: string // [!code ++]
          endDateTime: string // [!code ++]
        } // [!code ++]
        const event = { // [!code ++]
          summary, // [!code ++]
          description, // [!code ++]
          start: { dateTime: startDateTime }, // [!code ++]
          end: { dateTime: endDateTime } // [!code ++]
        } // [!code ++]
        await googleCalendarManager.setCalendarEvent(event) // [!code ++]
      }} // [!code ++]
    /> // [!code ++]
  </>
)
```

#### Breakdown summary of this `<Action>` Component

1. **Purpose of the `<Action>` Component**\
   `<Action>` components define specific actions that your agent can perform in response to user inputs. [Learn more](/api/agent/action) Here, it defines one triggerable action: **setting an event in Google Calendar**.

2. **Defining Action Properties**\
   Each `<Action>` is structured with the following properties:

   * **`name`**: A unique identifier for the action. Example: `'setCalendarEvent'`.
   * **`description`**: Explains what the action does. In this case, it sets a new event in the user's Google Calendar.
   * **`schema`**: Specifies the input structure for the action, defined using a `zod` schema. The schema expects the event's summary (`summary`), start date and time (`startDateTime`), end date and time (`endDateTime`), and a description (`description`), all of which must be strings.
   * **`examples`**: Provides sample inputs to guide the agent's behavior. Example: `{ summary: 'Meeting with John Doe', startDateTime: '2023-06-15T10:00:00-07:00', endDateTime: '2023-06-15T11:00:00-07:00', description: 'Discuss project timeline and requirements.' }`.

3. **`handler`: The Action's Core Logic**\
   The handler function is the core of the action: it contains the logic executed when the action is triggered, in this case creating a new event in the user's Google Calendar. Here's a breakdown:

   * **`PendingActionEvent`**: The handler receives an event object of type `PendingActionEvent`, which contains the data and context for the triggered action. Its `data` field holds `message.args`, the arguments passed when the action was triggered.
   * **Destructuring**: Inside the handler, the event data (`e.data.message.args`) is destructured into the specific fields `summary`, `description`, `startDateTime`, and `endDateTime`. These correspond to the values passed when the action was triggered.
   * **Event Creation**: Once the necessary data is extracted, an event object is created, structured according to the Google Calendar API's expected format (see the sketch after this list).
   * **Calling `googleCalendarManager.setCalendarEvent`**: Finally, `googleCalendarManager.setCalendarEvent(event)` is called to create the event in Google Calendar. The method is asynchronous, so `await` ensures the event is created before the handler returns.
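For instance, with the sample arguments from the `examples` property above, the handler would assemble an event object along these lines (the values are illustrative):

```tsx
// Illustrative only: in practice these values come from e.data.message.args.
const event: CalendarEvent = {
  summary: 'Meeting with John Doe',
  description: 'Discuss project timeline and requirements.',
  start: { dateTime: '2023-06-15T10:00:00-07:00' }, // RFC 3339 date-time with offset
  end: { dateTime: '2023-06-15T11:00:00-07:00' }
}
```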
### Step 7: Test out your PersonalAssistant Agent!

You can run `usdk chat` to test it out. [Learn more](/test-your-agent)

You can ask it questions like:

> Schedule my meeting with Steve on 15 November at 10 PM.

The source code for this example is available on [GitHub](https://github.com/UpstreetAI/usdk-examples/tree/main/personalAssistant). Share your Agent's response in our [Discord community](https://upstreet.ai/usdk-discord); we'd love to see how it responds.

file: ./content/docs/examples/discord-agent.mdx

# undefined: Discord Agent

This section describes how to build an Agent which can talk in Discord voice and chat.

In this guide, we build an Agent named "Nimbus" that can talk on [**Discord**](https://discord.com): it can discuss hot topics in text channels and even speak in voice channels.

The source code for this example is available on [GitHub](https://github.com/UpstreetAI/usdk-examples/tree/main/discordAgent).

## Video Tutorial

You can follow along with this example by watching the video below: