
Personal finance AI (v1) (#2022)

* AI sidebar

* Add chat and message models with associations

* Implement AI chat functionality with sidebar and messaging system

- Add chat and messages controllers
- Create chat and message views
- Implement chat-related routes
- Add message broadcasting and user interactions
- Update application layout to support chat sidebar
- Enhance user model with initials method
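
The message broadcasting described above is what the message model tests later in this diff assert: an "append" to the chat's "messages" target on create and an "update" of the per-message target on edit. A minimal sketch of that wiring; callback placement and partial rendering details are assumptions, not code from this diff:

  class Message < ApplicationRecord
    belongs_to :chat

    # Append new messages to the chat's "messages" frame, then update in place on edits.
    after_create_commit -> { broadcast_append_to chat, target: "messages" }
    after_update_commit -> { broadcast_update_to chat, target: "#{self.class.name.underscore}_#{id}" }
  end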

* Refactor AI sidebar with enhanced chat menu and interactions

- Update sidebar layout with dynamic width and improved responsiveness
- Add new chat menu Stimulus controller for toggling between chat and chat list views
- Improve chat list display with recent chats and empty state
- Extract AI avatar to a partial for reusability
- Enhance message display and interaction styling
- Add more contextual buttons and interaction hints

* Improve chat scroll behavior and message styling

- Refactor chat scroll functionality with Stimulus controller
- Optimize message scrolling in chat views
- Update message styling for better visual hierarchy
- Enhance chat container layout with flex and auto-scroll
- Simplify message rendering across different chat views

* Extract AI avatar to a shared partial for consistent styling

- Refactor AI avatar rendering across chat views
- Replace hardcoded avatar markup with a reusable partial
- Simplify avatar display in chats and messages views

* Update sidebar controller to handle right panel width dynamically

- Add conditional width class for right sidebar panel
- Ensure consistent sidebar toggle behavior for both left and right panels
- Use specific width class for right panel (w-[375px])

* Refactor chat form and AI greeting with flexible partials

- Extract message form to a reusable partial with dynamic context support
- Create flexible AI greeting partial for consistent welcome messages
- Simplify chat and sidebar views by leveraging new partials
- Add support for different form scenarios (chat, new chat, sidebar)
- Improve code modularity and reduce duplication

* Add chat clearing functionality with dynamic menu options

- Implement clear chat action in ChatsController
- Add clear chat route to support clearing messages
- Update AI sidebar with dropdown menu for chat actions
- Preserve system message when clearing chat
- Enhance chat interaction with new menu options
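
A hypothetical sketch of what the clear action could look like; the route, the Current.user scope, and the way the initial system/developer message is preserved are assumptions based on the bullets above:

  class ChatsController < ApplicationController
    def clear
      @chat = Current.user.chats.find(params[:id])
      # Keep the initial system/developer instructions, remove the conversation itself
      @chat.messages.where.not(type: "DeveloperMessage").destroy_all
      redirect_to chat_path(@chat)
    end
  end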

* Add frontmatter to project structure documentation

- Create initial frontmatter for structure.mdc file
- Include description and configuration options
- Prepare for potential dynamic documentation rendering

* Update general project rules with additional guidelines

- Add rule for using `Current.family` instead of `current_family`
- Include new guidelines for testing, API routes, and solution approach
- Expand project-specific rules for more consistent development practices

* Add OpenAI gem and AI-friendly data representations

- Add `ruby-openai` gem for AI integration
- Implement `to_ai_readable_hash` methods in BalanceSheet and IncomeStatement
- Include Promptable module in both models
- Add savings rate calculation method in IncomeStatement
- Prepare financial models for AI-powered insights and interactions
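
For illustration, the AI-readable representation might look roughly like this; the keys, the Money#format helpers, and the method body are assumptions, not the actual implementation:

  class BalanceSheet
    include Promptable

    def to_ai_readable_hash
      {
        as_of_date: Date.current.to_s,
        net_worth: net_worth.format,
        total_assets: total_assets.format,
        total_liabilities: total_liabilities.format
      }
    end
  end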

* Enhance AI Financial Assistant with Advanced Querying and Debugging Capabilities

- Implement comprehensive AI financial query system with function-based interactions
- Add detailed debug logging for AI responses and function calls
- Extend BalanceSheet and IncomeStatement models with AI-friendly methods
- Create robust error handling and fallback mechanisms for AI queries
- Update chat and message views to support debug mode and enhanced rendering
- Add AI query routes and initial test coverage for financial assistant

* Refactor AI sidebar and chat layout with improved structure and comments

- Remove inline AI chat from application layout
- Enhance AI sidebar with more semantic HTML structure
- Add descriptive comments to clarify different sections of chat view
- Improve flex layout and scrolling behavior in chat messages container
- Optimize message rendering with more explicit class names and structure

* Add Markdown rendering support for AI chat messages

- Implement `markdown` helper method in ApplicationHelper using Redcarpet
- Update message view to render AI messages with Markdown formatting
- Add comprehensive Markdown rendering options (tables, code blocks, links)
- Enhance AI Financial Assistant prompt to encourage Markdown usage
- Remove commented Markdown CSS in Tailwind application stylesheet
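
A sketch of a Redcarpet-backed helper along these lines; the exact render and extension options enabled are assumptions:

  module ApplicationHelper
    def markdown(text)
      renderer = Redcarpet::Render::HTML.new(filter_html: true, hard_wrap: true)
      Redcarpet::Markdown.new(renderer, tables: true, fenced_code_blocks: true, autolink: true)
        .render(text.to_s)
        .html_safe
    end
  end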

* Missing comma

* Enhance AI response processing with chat history context

* Improve AI debug logging with payload size limits and internal message flag

* Enhance AI chat interaction with improved thinking indicator and scrolling behavior

* Add AI consent and enable/disable functionality for AI chat
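
The controller tests below respond with 403 Forbidden when a user has AI disabled. A minimal sketch of the guard that implies; the filter name and the actions it covers are assumptions:

  class ChatsController < ApplicationController
    before_action :ensure_ai_enabled, only: %i[create]

    private
      def ensure_ai_enabled
        head :forbidden unless Current.user.ai_enabled?
      end
  end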

* Upgrade Biome and refactor JavaScript template literals

- Update @biomejs/biome to latest version with caret (^) notation
- Refactor AI query and chat controllers to use template literals
- Standardize npm scripts formatting in package.json

* Add beta testing usage note to AI consent modal

* Update test fixtures and configurations for AI chat functionality

- Add family association to chat fixtures and tests
- Set consistent password digest for test users
- Enable AI for test users
- Add OpenAI access token for test environment
- Update chat and user model tests to include family context

* Simplify data model and get tests passing

* Remove structure.mdc from version control

* Integrate AI chat styles into existing prose pattern

* Match Figma design spec, implement Turbo frames and actions for chats controller

* AI rules refresh

* Consolidate Stimulus controllers, thinking state, controllers, and views

* Naming, domain alignment

* Reset migrations

* Improve data model to support tool calls and message types

* Tool calling tests and fixtures

* Tool call implementation and test

* Get assistant test working again

* Test updates

* Process tool calls within provider

* Chat UI back to working state again

* Remove stale code

* Tests passing

* Update openai class naming to avoid conflicts

* Reconfigure test env

* Rebuild gemfile

* Fix naming conflicts for ChatResponse

* Message styles

* Use OpenAI conversation state management

* Assistant function base implementation

* Add back thinking messages, clean up error handling for chat

* Fix sync error when security price has bad data from provider

* Add balance sheet function to assistant

* Add better function calling error visibility

* Add income statement function
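
A sketch of a function following the interface exercised by the assistant and OpenAI tests below (class-level name/description plus an instance-level call); the class name, result shape, and family accessor are assumptions:

  class GetIncomeStatement < Assistant::Function
    class << self
      def name
        "get_income_statement"
      end

      def description
        "Returns a summary of the family's income statement"
      end
    end

    def call(params = {})
      # Assumes the base class exposes the current family and that the income
      # statement still provides the AI-readable hash described earlier.
      family.income_statement.to_ai_readable_hash.to_json
    end
  end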

* Simplify and clean up "thinking" interactions with Turbo frames

* Remove stale data definitions from functions

* Ensure VCR fixtures working with latest code

* basic stream implementation

* Get streaming working
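
The streamed chunks surface in the tests below with three types: "output_text", "function_request", and "response". A sketch of how a caller might consume them; the helper methods are hypothetical placeholders:

  streamer = proc do |chunk|
    case chunk.type
    when "output_text"
      append_text_to_message(chunk.data)   # hypothetical: append streamed text to the UI
    when "function_request"
      show_thinking_indicator(chunk.data)  # hypothetical: surface the pending tool call
    when "response"
      finalize_message(chunk.data)         # hypothetical: persist the completed response
    end
  end

  provider.chat_response(message, streamer: streamer)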

* Make AI sidebar wider when left sidebar is collapsed

* Get tests working with streaming responses

* Centralize provider error handling
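
The provider tests below wrap calls in with_provider_response and expect a Provider::Response exposing success?, data, and error. A rough sketch of such a wrapper; the retry handling is an assumption:

  def with_provider_response(retries: 0)
    attempts = 0
    begin
      Provider::Response.new(success?: true, data: yield, error: nil)
    rescue StandardError => error
      attempts += 1
      retry if attempts <= retries
      Provider::Response.new(success?: false, data: nil, error: error)
    end
  end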

* Provider data boundaries

---------

Co-authored-by: Josh Pigford <josh@joshpigford.com>
Zach Gollwitzer 2025-03-28 13:08:22 -04:00 committed by GitHub
parent 8e6b81af77
commit 2f6b11c18f
126 changed files with 3576 additions and 462 deletions


@ -21,6 +21,10 @@ class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
find("h1", text: "Welcome back, #{user.first_name}")
end
def login_as(user)
sign_in(user)
end
def sign_out
find("#user-menu").click
click_button "Logout"


@ -0,0 +1,52 @@
require "test_helper"
class ChatsControllerTest < ActionDispatch::IntegrationTest
setup do
@user = users(:family_admin)
@family = families(:dylan_family)
sign_in @user
end
test "cannot create a chat if AI is disabled" do
@user.update!(ai_enabled: false)
post chats_url, params: { chat: { content: "Hello", ai_model: "gpt-4o" } }
assert_response :forbidden
end
test "gets index" do
get chats_url
assert_response :success
end
test "creates chat" do
assert_difference("Chat.count") do
post chats_url, params: { chat: { content: "Hello", ai_model: "gpt-4o" } }
end
assert_redirected_to chat_path(Chat.order(created_at: :desc).first, thinking: true)
end
test "shows chat" do
get chat_url(chats(:one))
assert_response :success
end
test "destroys chat" do
assert_difference("Chat.count", -1) do
delete chat_url(chats(:one))
end
assert_redirected_to chats_url
end
test "should not allow access to other user's chats" do
other_user = users(:family_member)
other_chat = Chat.create!(user: other_user, title: "Other User's Chat")
get chat_url(other_chat)
assert_response :not_found
delete chat_url(other_chat)
assert_response :not_found
end
end


@ -0,0 +1,22 @@
require "test_helper"
class MessagesControllerTest < ActionDispatch::IntegrationTest
setup do
sign_in @user = users(:family_admin)
@chat = @user.chats.first
end
test "can create a message" do
post chat_messages_url(@chat), params: { message: { content: "Hello", ai_model: "gpt-4o" } }
assert_redirected_to chat_path(@chat, thinking: true)
end
test "cannot create a message if AI is disabled" do
@user.update!(ai_enabled: false)
post chat_messages_url(@chat), params: { message: { content: "Hello", ai_model: "gpt-4o" } }
assert_response :forbidden
end
end


@ -8,7 +8,7 @@ class Settings::HostingsControllerTest < ActionDispatch::IntegrationTest
sign_in users(:family_admin)
@provider = mock
Providers.stubs(:synth).returns(@provider)
Provider::Registry.stubs(:get_provider).with(:synth).returns(@provider)
@usage_response = provider_success_response(
OpenStruct.new(
used: 10,
@ -20,12 +20,12 @@ class Settings::HostingsControllerTest < ActionDispatch::IntegrationTest
end
test "cannot edit when self hosting is disabled" do
assert_raises(RuntimeError, "Settings not available on non-self-hosted instance") do
with_env_overrides SELF_HOSTED: "false" do
get settings_hosting_url
end
assert_response :forbidden
assert_raises(RuntimeError, "Settings not available on non-self-hosted instance") do
patch settings_hosting_url, params: { setting: { require_invite_for_signup: true } }
assert_response :forbidden
end
end
@ -40,8 +40,6 @@ class Settings::HostingsControllerTest < ActionDispatch::IntegrationTest
test "can update settings when self hosting is enabled" do
with_self_hosting do
assert_nil Setting.synth_api_key
patch settings_hosting_url, params: { setting: { synth_api_key: "1234567890" } }
assert_equal "1234567890", Setting.synth_api_key

test/fixtures/chats.yml

@ -0,0 +1,7 @@
one:
title: First Chat
user: family_admin
two:
title: Second Chat
user: family_member

test/fixtures/messages.yml

@ -0,0 +1,43 @@
chat1_developer:
type: DeveloperMessage
content: You are a personal finance assistant. Be concise and helpful.
chat: one
created_at: 2025-03-20 12:00:00
debug: false
chat1_developer_debug:
type: DeveloperMessage
content: An internal debug message
chat: one
created_at: 2025-03-20 12:00:02
debug: true
chat1_user:
type: UserMessage
content: Can you help me understand my spending habits?
chat: one
ai_model: gpt-4o
created_at: 2025-03-20 12:00:01
chat2_user:
type: UserMessage
content: Can you help me understand my spending habits?
ai_model: gpt-4o
chat: two
created_at: 2025-03-20 12:00:01
chat1_assistant_reasoning:
type: AssistantMessage
content: I'm thinking...
ai_model: gpt-4o
chat: one
created_at: 2025-03-20 12:01:00
reasoning: true
chat1_assistant_response:
type: AssistantMessage
content: Hello! I can help you understand your spending habits.
ai_model: gpt-4o
chat: one
created_at: 2025-03-20 12:02:00
reasoning: false

test/fixtures/tool_calls.yml

@ -0,0 +1,7 @@
one:
type: ToolCall::Function
function_name: get_user_info
provider_id: fc_12345xyz
provider_call_id: call_12345xyz
function_arguments: {}
message: chat1_assistant_response


@ -3,34 +3,38 @@ empty:
first_name: User
last_name: One
email: user1@email.com
password_digest: <%= BCrypt::Password.create('password') %>
password_digest: $2a$12$7p8hMsoc0zSaC8eY9oewzelHbmCPdpPi.mGiyG4vdZwrXmGpRPoNK
onboarded_at: <%= 3.days.ago %>
ai_enabled: true
maybe_support_staff:
family: empty
first_name: Support
last_name: Admin
email: support@maybefinance.com
password_digest: <%= BCrypt::Password.create('password') %>
password_digest: $2a$12$7p8hMsoc0zSaC8eY9oewzelHbmCPdpPi.mGiyG4vdZwrXmGpRPoNK
role: super_admin
onboarded_at: <%= 3.days.ago %>
ai_enabled: true
family_admin:
family: dylan_family
first_name: Bob
last_name: Dylan
email: bob@bobdylan.com
password_digest: <%= BCrypt::Password.create('password') %>
password_digest: $2a$12$7p8hMsoc0zSaC8eY9oewzelHbmCPdpPi.mGiyG4vdZwrXmGpRPoNK
role: admin
onboarded_at: <%= 3.days.ago %>
ai_enabled: true
family_member:
family: dylan_family
first_name: Jakob
last_name: Dylan
email: jakobdylan@yahoo.com
password_digest: <%= BCrypt::Password.create('password') %>
password_digest: $2a$12$7p8hMsoc0zSaC8eY9oewzelHbmCPdpPi.mGiyG4vdZwrXmGpRPoNK
onboarded_at: <%= 3.days.ago %>
ai_enabled: true
new_email:
family: empty
@ -38,5 +42,6 @@ new_email:
last_name: User
email: user@example.com
unconfirmed_email: new@example.com
password_digest: <%= BCrypt::Password.create('password123') %>
onboarded_at: <%= Time.current %>
password_digest: $2a$12$7p8hMsoc0zSaC8eY9oewzelHbmCPdpPi.mGiyG4vdZwrXmGpRPoNK
onboarded_at: <%= Time.current %>
ai_enabled: true


@ -11,11 +11,11 @@ module ExchangeRateProviderInterfaceTest
date: Date.parse("01.01.2024")
)
rate = response.data.rate
rate = response.data
assert_kind_of ExchangeRate, rate
assert_equal "USD", rate.from_currency
assert_equal "GBP", rate.to_currency
assert_equal "USD", rate.from
assert_equal "GBP", rate.to
assert_in_delta 0.78, rate.rate, 0.01
end
end
@ -25,7 +25,7 @@ module ExchangeRateProviderInterfaceTest
from: "USD", to: "GBP", start_date: Date.parse("01.01.2024"), end_date: Date.parse("31.07.2024")
)
assert 213, response.data.rates.count # 213 days between 01.01.2024 and 31.07.2024
assert_equal 213, response.data.count # 213 days between 01.01.2024 and 31.07.2024
end
end


@ -0,0 +1,10 @@
require "test_helper"
module LLMInterfaceTest
extend ActiveSupport::Testing::Declarative
private
def vcr_key_prefix
@subject.class.name.demodulize.underscore
end
end


@ -8,8 +8,9 @@ module SecurityProviderInterfaceTest
VCR.use_cassette("#{vcr_key_prefix}/security_price") do
response = @subject.fetch_security_price(aapl, date: Date.iso8601("2024-08-01"))
assert response.success?
assert response.data.price.present?
assert response.data.present?
end
end
@ -24,19 +25,18 @@ module SecurityProviderInterfaceTest
)
assert response.success?
assert 213, response.data.prices.count
assert_equal 147, response.data.count # Synth won't return prices on weekends / holidays, so less than total day count of 213
end
end
test "searches securities" do
VCR.use_cassette("#{vcr_key_prefix}/security_search") do
response = @subject.search_securities("AAPL", country_code: "US")
securities = response.data.securities
securities = response.data
assert securities.any?
security = securities.first
assert_kind_of Security, security
assert_equal "AAPL", security.ticker
assert_equal "AAPL", security.symbol
end
end
@ -47,10 +47,10 @@ module SecurityProviderInterfaceTest
response = @subject.fetch_security_info(aapl)
info = response.data
assert_equal "AAPL", info.ticker
assert_equal "AAPL", info.symbol
assert_equal "Apple Inc.", info.name
assert info.logo_url.present?
assert_equal "common stock", info.kind
assert info.logo_url.present?
assert info.description.present?
end
end


@ -1,7 +0,0 @@
require "test_helper"
class EnrichDataJobTest < ActiveJob::TestCase
# test "the truth" do
# assert true
# end
end


@ -1,7 +0,0 @@
require "test_helper"
class RevertImportJobTest < ActiveJob::TestCase
# test "the truth" do
# assert true
# end
end


@ -1,7 +0,0 @@
require "test_helper"
class UserPurgeJobTest < ActiveJob::TestCase
# test "the truth" do
# assert true
# end
end


@ -24,13 +24,11 @@ class Account::ConvertibleTest < ActiveSupport::TestCase
ExchangeRate.delete_all
provider_response = provider_success_response(
ExchangeRate::Provideable::FetchRatesData.new(
rates: [
ExchangeRate.new(from_currency: "EUR", to_currency: "USD", date: 2.days.ago.to_date, rate: 1.1),
ExchangeRate.new(from_currency: "EUR", to_currency: "USD", date: 1.day.ago.to_date, rate: 1.2),
ExchangeRate.new(from_currency: "EUR", to_currency: "USD", date: Date.current, rate: 1.3)
]
)
[
OpenStruct.new(from: "EUR", to: "USD", date: 2.days.ago.to_date, rate: 1.1),
OpenStruct.new(from: "EUR", to: "USD", date: 1.day.ago.to_date, rate: 1.2),
OpenStruct.new(from: "EUR", to: "USD", date: Date.current, rate: 1.3)
]
)
@provider.expects(:fetch_exchange_rates)


@ -82,12 +82,6 @@ class Account::Holding::PortfolioCacheTest < ActiveSupport::TestCase
def expect_provider_prices(prices, start_date:, end_date: Date.current)
@provider.expects(:fetch_security_prices)
.with(@security, start_date: start_date, end_date: end_date)
.returns(
provider_success_response(
Security::Provideable::PricesData.new(
prices: prices
)
)
)
.returns(provider_success_response(prices))
end
end


@ -0,0 +1,19 @@
require "test_helper"
class AssistantMessageTest < ActiveSupport::TestCase
setup do
@chat = chats(:one)
end
test "broadcasts append after creation" do
message = AssistantMessage.create!(chat: @chat, content: "Hello from assistant", ai_model: "gpt-4o")
message.update!(content: "updated")
streams = capture_turbo_stream_broadcasts(@chat)
assert_equal 2, streams.size
assert_equal "append", streams.first["action"]
assert_equal "messages", streams.first["target"]
assert_equal "update", streams.last["action"]
assert_equal "assistant_message_#{message.id}", streams.last["target"]
end
end


@ -0,0 +1,86 @@
require "test_helper"
require "ostruct"
class AssistantTest < ActiveSupport::TestCase
include ProviderTestHelper
setup do
@chat = chats(:two)
@message = @chat.messages.create!(
type: "UserMessage",
content: "Help me with my finances",
ai_model: "gpt-4o"
)
@assistant = Assistant.for_chat(@chat)
@provider = mock
@assistant.expects(:get_model_provider).with("gpt-4o").returns(@provider)
end
test "responds to basic prompt" do
text_chunk = OpenStruct.new(type: "output_text", data: "Hello from assistant")
response_chunk = OpenStruct.new(
type: "response",
data: OpenStruct.new(
id: "1",
model: "gpt-4o",
messages: [
OpenStruct.new(
id: "1",
content: "Hello from assistant",
)
],
functions: []
)
)
@provider.expects(:chat_response).with do |message, **options|
options[:streamer].call(text_chunk)
options[:streamer].call(response_chunk)
true
end
assert_difference "AssistantMessage.count", 1 do
@assistant.respond_to(@message)
end
end
test "responds with tool function calls" do
function_request_chunk = OpenStruct.new(type: "function_request", data: "get_net_worth")
text_chunk = OpenStruct.new(type: "output_text", data: "Your net worth is $124,200")
response_chunk = OpenStruct.new(
type: "response",
data: OpenStruct.new(
id: "1",
model: "gpt-4o",
messages: [
OpenStruct.new(
id: "1",
content: "Your net worth is $124,200",
)
],
functions: [
OpenStruct.new(
id: "1",
call_id: "1",
name: "get_net_worth",
arguments: "{}",
result: "$124,200"
)
]
)
)
@provider.expects(:chat_response).with do |message, **options|
options[:streamer].call(function_request_chunk)
options[:streamer].call(text_chunk)
options[:streamer].call(response_chunk)
true
end
assert_difference "AssistantMessage.count", 1 do
@assistant.respond_to(@message)
message = @chat.messages.ordered.where(type: "AssistantMessage").last
assert_equal 1, message.tool_calls.size
end
end
end

test/models/chat_test.rb

@ -0,0 +1,31 @@
require "test_helper"
class ChatTest < ActiveSupport::TestCase
setup do
@user = users(:family_admin)
@assistant = mock
end
test "user sees all messages in debug mode" do
chat = chats(:one)
with_env_overrides AI_DEBUG_MODE: "true" do
assert_equal chat.messages.count, chat.conversation_messages.count
end
end
test "user sees assistant and user messages in normal mode" do
chat = chats(:one)
assert_equal 3, chat.conversation_messages.count
end
test "creates with initial message" do
prompt = "Test prompt"
assert_difference "@user.chats.count", 1 do
chat = @user.chats.start!(prompt, model: "gpt-4o")
assert_equal 1, chat.messages.count
assert_equal 1, chat.messages.where(type: "UserMessage").count
end
end
end


@ -0,0 +1,28 @@
require "test_helper"
class DeveloperMessageTest < ActiveSupport::TestCase
setup do
@chat = chats(:one)
end
test "does not broadcast" do
message = DeveloperMessage.create!(chat: @chat, content: "Some instructions")
message.update!(content: "updated")
assert_no_turbo_stream_broadcasts(@chat)
end
test "broadcasts if debug mode is enabled" do
with_env_overrides AI_DEBUG_MODE: "true" do
message = DeveloperMessage.create!(chat: @chat, content: "Some instructions")
message.update!(content: "updated")
streams = capture_turbo_stream_broadcasts(@chat)
assert_equal 2, streams.size
assert_equal "append", streams.first["action"]
assert_equal "messages", streams.first["target"]
assert_equal "update", streams.last["action"]
assert_equal "developer_message_#{message.id}", streams.last["target"]
end
end
end


@ -26,13 +26,11 @@ class ExchangeRateTest < ActiveSupport::TestCase
ExchangeRate.delete_all
provider_response = provider_success_response(
ExchangeRate::Provideable::FetchRateData.new(
rate: ExchangeRate.new(
from_currency: "USD",
to_currency: "EUR",
date: Date.current,
rate: 1.2
)
OpenStruct.new(
from: "USD",
to: "EUR",
date: Date.current,
rate: 1.2
)
)
@ -47,13 +45,11 @@ class ExchangeRateTest < ActiveSupport::TestCase
ExchangeRate.delete_all
provider_response = provider_success_response(
ExchangeRate::Provideable::FetchRateData.new(
rate: ExchangeRate.new(
from_currency: "USD",
to_currency: "EUR",
date: Date.current,
rate: 1.2
)
OpenStruct.new(
from: "USD",
to: "EUR",
date: Date.current,
rate: 1.2
)
)
@ -65,7 +61,7 @@ class ExchangeRateTest < ActiveSupport::TestCase
end
test "returns nil on provider error" do
provider_response = provider_error_response(Provider::ProviderError.new("Test error"))
provider_response = provider_error_response(StandardError.new("Test error"))
@provider.expects(:fetch_exchange_rate).returns(provider_response)
@ -77,15 +73,11 @@ class ExchangeRateTest < ActiveSupport::TestCase
ExchangeRate.create!(date: 1.day.ago.to_date, from_currency: "USD", to_currency: "EUR", rate: 0.9)
provider_response = provider_success_response(
ExchangeRate::Provideable::FetchRatesData.new(
rates: [
ExchangeRate.new(from_currency: "USD", to_currency: "EUR", date: Date.current, rate: 1.3),
ExchangeRate.new(from_currency: "USD", to_currency: "EUR", date: 1.day.ago.to_date, rate: 1.4),
ExchangeRate.new(from_currency: "USD", to_currency: "EUR", date: 2.days.ago.to_date, rate: 1.5)
]
)
)
provider_response = provider_success_response([
OpenStruct.new(from: "USD", to: "EUR", date: Date.current, rate: 1.3),
OpenStruct.new(from: "USD", to: "EUR", date: 1.day.ago.to_date, rate: 1.4),
OpenStruct.new(from: "USD", to: "EUR", date: 2.days.ago.to_date, rate: 1.5)
])
@provider.expects(:fetch_exchange_rates)
.with(from: "USD", to: "EUR", start_date: 2.days.ago.to_date, end_date: Date.current)


@ -0,0 +1,136 @@
require "test_helper"
class Provider::OpenaiTest < ActiveSupport::TestCase
include LLMInterfaceTest
setup do
@subject = @openai = Provider::Openai.new(ENV.fetch("OPENAI_ACCESS_TOKEN", "test-openai-token"))
@subject_model = "gpt-4o"
@chat = chats(:two)
end
test "openai errors are automatically raised" do
VCR.use_cassette("openai/chat/error") do
response = @openai.chat_response(UserMessage.new(
chat: @chat,
content: "Error test",
ai_model: "invalid-model-that-will-trigger-api-error"
))
assert_not response.success?
assert_kind_of Provider::Openai::Error, response.error
end
end
test "basic chat response" do
VCR.use_cassette("openai/chat/basic_response") do
message = @chat.messages.create!(
type: "UserMessage",
content: "This is a chat test. If it's working, respond with a single word: Yes",
ai_model: @subject_model
)
response = @subject.chat_response(message)
assert response.success?
assert_equal 1, response.data.messages.size
assert_includes response.data.messages.first.content, "Yes"
end
end
test "streams basic chat response" do
VCR.use_cassette("openai/chat/basic_response") do
collected_chunks = []
mock_streamer = proc do |chunk|
collected_chunks << chunk
end
message = @chat.messages.create!(
type: "UserMessage",
content: "This is a chat test. If it's working, respond with a single word: Yes",
ai_model: @subject_model
)
@subject.chat_response(message, streamer: mock_streamer)
tool_call_chunks = collected_chunks.select { |chunk| chunk.type == "function_request" }
text_chunks = collected_chunks.select { |chunk| chunk.type == "output_text" }
response_chunks = collected_chunks.select { |chunk| chunk.type == "response" }
assert_equal 1, text_chunks.size
assert_equal 1, response_chunks.size
assert_equal 0, tool_call_chunks.size
assert_equal "Yes", text_chunks.first.data
assert_equal "Yes", response_chunks.first.data.messages.first.content
end
end
test "chat response with tool calls" do
VCR.use_cassette("openai/chat/tool_calls") do
response = @subject.chat_response(
tool_call_message,
instructions: "Use the tools available to you to answer the user's question.",
available_functions: [ PredictableToolFunction.new(@chat) ]
)
assert response.success?
assert_equal 1, response.data.functions.size
assert_equal 1, response.data.messages.size
assert_includes response.data.messages.first.content, PredictableToolFunction.expected_test_result
end
end
test "streams chat response with tool calls" do
VCR.use_cassette("openai/chat/tool_calls") do
collected_chunks = []
mock_streamer = proc do |chunk|
collected_chunks << chunk
end
@subject.chat_response(
tool_call_message,
instructions: "Use the tools available to you to answer the user's question.",
available_functions: [ PredictableToolFunction.new(@chat) ],
streamer: mock_streamer
)
text_chunks = collected_chunks.select { |chunk| chunk.type == "output_text" }
tool_call_chunks = collected_chunks.select { |chunk| chunk.type == "function_request" }
response_chunks = collected_chunks.select { |chunk| chunk.type == "response" }
assert_equal 1, tool_call_chunks.count
assert text_chunks.count >= 1
assert_equal 1, response_chunks.count
assert_includes response_chunks.first.data.messages.first.content, PredictableToolFunction.expected_test_result
end
end
private
def tool_call_message
UserMessage.new(chat: @chat, content: "What is my net worth?", ai_model: @subject_model)
end
class PredictableToolFunction < Assistant::Function
class << self
def expected_test_result
"$124,200"
end
def name
"get_net_worth"
end
def description
"Gets user net worth data"
end
end
def call(params = {})
self.class.expected_test_result
end
end
end


@ -1,11 +1,11 @@
require "test_helper"
class ProvidersTest < ActiveSupport::TestCase
class Provider::RegistryTest < ActiveSupport::TestCase
test "synth configured with ENV" do
Setting.stubs(:synth_api_key).returns(nil)
with_env_overrides SYNTH_API_KEY: "123" do
assert_instance_of Provider::Synth, Providers.synth
assert_instance_of Provider::Synth, Provider::Registry.get_provider(:synth)
end
end
@ -13,7 +13,7 @@ class ProvidersTest < ActiveSupport::TestCase
Setting.stubs(:synth_api_key).returns("123")
with_env_overrides SYNTH_API_KEY: nil do
assert_instance_of Provider::Synth, Providers.synth
assert_instance_of Provider::Synth, Provider::Registry.get_provider(:synth)
end
end
@ -21,7 +21,7 @@ class ProvidersTest < ActiveSupport::TestCase
Setting.stubs(:synth_api_key).returns(nil)
with_env_overrides SYNTH_API_KEY: nil do
assert_nil Providers.synth
assert_nil Provider::Registry.get_provider(:synth)
end
end
end


@ -3,7 +3,7 @@ require "ostruct"
class TestProvider < Provider
def fetch_data
provider_response(retries: 3) do
with_provider_response(retries: 3) do
client.get("/test")
end
end
@ -51,7 +51,7 @@ class ProviderTest < ActiveSupport::TestCase
client.expects(:get)
.with("/test")
.returns(Provider::ProviderResponse.new(success?: true, data: "success", error: nil))
.returns(Provider::Response.new(success?: true, data: "success", error: nil))
.in_sequence(sequence)
response = @provider.fetch_data


@ -40,11 +40,11 @@ class Security::PriceTest < ActiveSupport::TestCase
security = securities(:aapl)
Security::Price.delete_all # Clear any existing prices
provider_response = provider_error_response(Provider::ProviderError.new("Test error"))
with_provider_response = provider_error_response(StandardError.new("Test error"))
@provider.expects(:fetch_security_price)
.with(security, date: Date.current)
.returns(provider_response)
.returns(with_provider_response)
assert_not @security.find_or_fetch_price(date: Date.current)
end
@ -72,12 +72,12 @@ class Security::PriceTest < ActiveSupport::TestCase
def expect_provider_price(security:, price:, date:)
@provider.expects(:fetch_security_price)
.with(security, date: date)
.returns(provider_success_response(Security::Provideable::PriceData.new(price: price)))
.returns(provider_success_response(price))
end
def expect_provider_prices(security:, prices:, start_date:, end_date:)
@provider.expects(:fetch_security_prices)
.with(security, start_date: start_date, end_date: end_date)
.returns(provider_success_response(Security::Provideable::PricesData.new(prices: prices)))
.returns(provider_success_response(prices))
end
end


@ -0,0 +1,21 @@
require "test_helper"
class UserMessageTest < ActiveSupport::TestCase
setup do
@chat = chats(:one)
end
test "requests assistant response after creation" do
@chat.expects(:ask_assistant_later).once
message = UserMessage.create!(chat: @chat, content: "Hello from user", ai_model: "gpt-4o")
message.update!(content: "updated")
streams = capture_turbo_stream_broadcasts(@chat)
assert_equal 2, streams.size
assert_equal "append", streams.first["action"]
assert_equal "messages", streams.first["target"]
assert_equal "update", streams.last["action"]
assert_equal "user_message_#{message.id}", streams.last["target"]
end
end


@ -1,6 +1,6 @@
module ProviderTestHelper
def provider_success_response(data)
Provider::ProviderResponse.new(
Provider::Response.new(
success?: true,
data: data,
error: nil
@ -8,7 +8,7 @@ module ProviderTestHelper
end
def provider_error_response(error)
Provider::ProviderResponse.new(
Provider::Response.new(
success?: false,
data: nil,
error: error

test/system/chats_test.rb

@ -0,0 +1,66 @@
require "application_system_test_case"
class ChatsTest < ApplicationSystemTestCase
setup do
@user = users(:family_admin)
login_as(@user)
end
test "sidebar shows consent if ai is disabled for user" do
@user.update!(ai_enabled: false)
visit root_path
within "#chat-container" do
assert_selector "h3", text: "Enable Personal Finance AI"
end
end
test "sidebar shows index when enabled and chats are empty" do
@user.update!(ai_enabled: true)
@user.chats.destroy_all
visit root_url
within "#chat-container" do
assert_selector "h1", text: "Chats"
end
end
test "sidebar shows last viewed chat" do
@user.update!(ai_enabled: true)
click_on @user.chats.first.title
# Page refresh
visit root_url
# After page refresh, we're still on the last chat we were viewing
within "#chat-container" do
assert_selector "h1", text: @user.chats.first.title
end
end
test "create chat and navigate chats sidebar" do
@user.chats.destroy_all
visit root_url
Chat.any_instance.expects(:ask_assistant_later).once
within "#chat-form" do
fill_in "chat[content]", with: "Can you help with my finances?"
find("button[type='submit']").click
end
assert_text "Can you help with my finances?"
find("#chat-nav-back").click
assert_selector "h1", text: "Chats"
click_on @user.chats.reload.first.title
assert_text "Can you help with my finances?"
end
end


@ -33,7 +33,7 @@ class SettingsTest < ApplicationSystemTestCase
test "can update self hosting settings" do
Rails.application.config.app_mode.stubs(:self_hosted?).returns(true)
Providers.stubs(:synth).returns(nil)
Provider::Registry.stubs(:get_provider).with(:synth).returns(nil)
open_settings_from_sidebar
assert_selector "li", text: "Self hosting"
click_link "Self hosting"


@ -24,6 +24,8 @@ VCR.configure do |config|
config.ignore_localhost = true
config.default_cassette_options = { erb: true }
config.filter_sensitive_data("<SYNTH_API_KEY>") { ENV["SYNTH_API_KEY"] }
config.filter_sensitive_data("<OPENAI_ACCESS_TOKEN>") { ENV["OPENAI_ACCESS_TOKEN"] }
config.filter_sensitive_data("<OPENAI_ORGANIZATION_ID>") { ENV["OPENAI_ORGANIZATION_ID"] }
end
module ActiveSupport


@ -0,0 +1,92 @@
---
http_interactions:
- request:
method: post
uri: https://api.openai.com/v1/responses
body:
encoding: UTF-8
string: '{"model":"gpt-4o","input":[{"role":"user","content":"This is a chat
test. If it''s working, respond with a single word: Yes"}],"instructions":null,"tools":[],"previous_response_id":null,"stream":true}'
headers:
Content-Type:
- application/json
Authorization:
- Bearer <OPENAI_ACCESS_TOKEN>
Accept-Encoding:
- gzip;q=1.0,deflate;q=0.6,identity;q=0.3
Accept:
- "*/*"
User-Agent:
- Ruby
response:
status:
code: 200
message: OK
headers:
Date:
- Wed, 26 Mar 2025 21:27:38 GMT
Content-Type:
- text/event-stream; charset=utf-8
Transfer-Encoding:
- chunked
Connection:
- keep-alive
Openai-Version:
- '2020-10-01'
Openai-Organization:
- "<OPENAI_ORGANIZATION_ID>"
X-Request-Id:
- req_8fce503a4c5be145dda20867925b1622
Openai-Processing-Ms:
- '103'
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Cf-Cache-Status:
- DYNAMIC
Set-Cookie:
- __cf_bm=o5kysxtwKJs3TPoOquM0X4MkyLIaylWhRd8LhagxXck-1743024458-1.0.1.1-ol6ndVCx6dHLGnc9.YmKYwgfOBqhSZSBpIHg4STCi4OBhrgt70FYPmMptrYDvg.SoFuS5RAS_pGiNNWXHspHio3gTfJ87vIdT936GYHIDrc;
path=/; expires=Wed, 26-Mar-25 21:57:38 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=Iqk8pY6uwz2lLhdKt0PwWTdtYQUqqvS6xmP9DMVko2A-1743024458829-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
X-Content-Type-Options:
- nosniff
Server:
- cloudflare
Cf-Ray:
- 9269bbb21b1ecf43-CMH
Alt-Svc:
- h3=":443"; ma=86400
body:
encoding: UTF-8
string: |+
event: response.created
data: {"type":"response.created","response":{"id":"resp_67e4714ab0148192ae2cc4303794d6fc0c1a792abcdc2819","object":"response","created_at":1743024458,"status":"in_progress","error":null,"incomplete_details":null,"instructions":null,"max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.in_progress
data: {"type":"response.in_progress","response":{"id":"resp_67e4714ab0148192ae2cc4303794d6fc0c1a792abcdc2819","object":"response","created_at":1743024458,"status":"in_progress","error":null,"incomplete_details":null,"instructions":null,"max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.output_item.added
data: {"type":"response.output_item.added","output_index":0,"item":{"type":"message","id":"msg_67e4714b1f8c8192b9b16febe8be86550c1a792abcdc2819","status":"in_progress","role":"assistant","content":[]}}
event: response.content_part.added
data: {"type":"response.content_part.added","item_id":"msg_67e4714b1f8c8192b9b16febe8be86550c1a792abcdc2819","output_index":0,"content_index":0,"part":{"type":"output_text","text":"","annotations":[]}}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e4714b1f8c8192b9b16febe8be86550c1a792abcdc2819","output_index":0,"content_index":0,"delta":"Yes"}
event: response.output_text.done
data: {"type":"response.output_text.done","item_id":"msg_67e4714b1f8c8192b9b16febe8be86550c1a792abcdc2819","output_index":0,"content_index":0,"text":"Yes"}
event: response.content_part.done
data: {"type":"response.content_part.done","item_id":"msg_67e4714b1f8c8192b9b16febe8be86550c1a792abcdc2819","output_index":0,"content_index":0,"part":{"type":"output_text","text":"Yes","annotations":[]}}
event: response.output_item.done
data: {"type":"response.output_item.done","output_index":0,"item":{"type":"message","id":"msg_67e4714b1f8c8192b9b16febe8be86550c1a792abcdc2819","status":"completed","role":"assistant","content":[{"type":"output_text","text":"Yes","annotations":[]}]}}
event: response.completed
data: {"type":"response.completed","response":{"id":"resp_67e4714ab0148192ae2cc4303794d6fc0c1a792abcdc2819","object":"response","created_at":1743024458,"status":"completed","error":null,"incomplete_details":null,"instructions":null,"max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[{"type":"message","id":"msg_67e4714b1f8c8192b9b16febe8be86550c1a792abcdc2819","status":"completed","role":"assistant","content":[{"type":"output_text","text":"Yes","annotations":[]}]}],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":{"input_tokens":43,"input_tokens_details":{"cached_tokens":0},"output_tokens":2,"output_tokens_details":{"reasoning_tokens":0},"total_tokens":45},"user":null,"metadata":{}}}
recorded_at: Wed, 26 Mar 2025 21:27:39 GMT
recorded_with: VCR 6.3.1
...


@ -0,0 +1,72 @@
---
http_interactions:
- request:
method: post
uri: https://api.openai.com/v1/responses
body:
encoding: UTF-8
string: '{"model":"invalid-model-that-will-trigger-api-error","input":[{"role":"user","content":"Error
test"}],"instructions":null,"tools":[],"previous_response_id":null,"stream":true}'
headers:
Content-Type:
- application/json
Authorization:
- Bearer <OPENAI_ACCESS_TOKEN>
Accept-Encoding:
- gzip;q=1.0,deflate;q=0.6,identity;q=0.3
Accept:
- "*/*"
User-Agent:
- Ruby
response:
status:
code: 400
message: Bad Request
headers:
Date:
- Wed, 26 Mar 2025 21:27:19 GMT
Content-Type:
- application/json
Content-Length:
- '207'
Connection:
- keep-alive
Openai-Version:
- '2020-10-01'
Openai-Organization:
- "<OPENAI_ORGANIZATION_ID>"
X-Request-Id:
- req_2b86e02f664e790dfa475f111402b722
Openai-Processing-Ms:
- '146'
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Cf-Cache-Status:
- DYNAMIC
Set-Cookie:
- __cf_bm=gAU0gS_ZQBfQmFkc_jKM73dhkNISbBY9FlQjGnZ6CfU-1743024439-1.0.1.1-bWRoC737.SOJPZrP90wTJLVmelTpxFqIsrunq2Lqgy4J3VvLtYBEBrqY0v4d94F5fMcm0Ju.TfQi0etmvqZtUSMRn6rvkMLmXexRcxP.1jE;
path=/; expires=Wed, 26-Mar-25 21:57:19 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=XnxX4KU80himuKAUavZYtkQasOjXJDJD.QLyMrfBSUU-1743024439792-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
X-Content-Type-Options:
- nosniff
Server:
- cloudflare
Cf-Ray:
- 9269bb3b2c14cf74-CMH
Alt-Svc:
- h3=":443"; ma=86400
body:
encoding: UTF-8
string: |-
{
"error": {
"message": "The requested model 'invalid-model-that-will-trigger-api-error' does not exist.",
"type": "invalid_request_error",
"param": "model",
"code": "model_not_found"
}
}
recorded_at: Wed, 26 Mar 2025 21:27:19 GMT
recorded_with: VCR 6.3.1


@ -0,0 +1,201 @@
---
http_interactions:
- request:
method: post
uri: https://api.openai.com/v1/responses
body:
encoding: UTF-8
string: '{"model":"gpt-4o","input":[{"role":"user","content":"What is my net
worth?"}],"instructions":"Use the tools available to you to answer the user''s
question.","tools":[{"type":"function","name":"get_net_worth","description":"Gets
user net worth data","parameters":{"type":"object","properties":{},"required":[],"additionalProperties":false},"strict":true}],"previous_response_id":null,"stream":true}'
headers:
Content-Type:
- application/json
Authorization:
- Bearer <OPENAI_ACCESS_TOKEN>
Accept-Encoding:
- gzip;q=1.0,deflate;q=0.6,identity;q=0.3
Accept:
- "*/*"
User-Agent:
- Ruby
response:
status:
code: 200
message: OK
headers:
Date:
- Wed, 26 Mar 2025 21:22:09 GMT
Content-Type:
- text/event-stream; charset=utf-8
Transfer-Encoding:
- chunked
Connection:
- keep-alive
Openai-Version:
- '2020-10-01'
Openai-Organization:
- "<OPENAI_ORGANIZATION_ID>"
X-Request-Id:
- req_4f04cffbab6051b3ac301038e3796092
Openai-Processing-Ms:
- '114'
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Cf-Cache-Status:
- DYNAMIC
Set-Cookie:
- __cf_bm=F5haUlL1HA1srjwZugBxG6XWbGg.NyQBnJTTirKs5KI-1743024129-1.0.1.1-D842I3sPgDgH_KXyroq6uVivEnbWvm9WJF.L8a11GgUcULXjhweLHs0mXe6MWruf.FJe.lZj.KmX0tCqqdpKIt5JvlbHXt5D_9svedktlZY;
path=/; expires=Wed, 26-Mar-25 21:52:09 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=MmuRzsy8ebDMe6ibCEwtGp2RzcntpAmdvDlhIZtlY1s-1743024129721-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
X-Content-Type-Options:
- nosniff
Server:
- cloudflare
Cf-Ray:
- 9269b3a97f370002-ORD
Alt-Svc:
- h3=":443"; ma=86400
body:
encoding: UTF-8
string: |+
event: response.created
data: {"type":"response.created","response":{"id":"resp_67e4700196288192b27a4effc08dc47f069d9116026394b6","object":"response","created_at":1743024129,"status":"in_progress","error":null,"incomplete_details":null,"instructions":"Use the tools available to you to answer the user's question.","max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[{"type":"function","description":"Gets user net worth data","name":"get_net_worth","parameters":{"type":"object","properties":{},"required":[],"additionalProperties":false},"strict":true}],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.in_progress
data: {"type":"response.in_progress","response":{"id":"resp_67e4700196288192b27a4effc08dc47f069d9116026394b6","object":"response","created_at":1743024129,"status":"in_progress","error":null,"incomplete_details":null,"instructions":"Use the tools available to you to answer the user's question.","max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[{"type":"function","description":"Gets user net worth data","name":"get_net_worth","parameters":{"type":"object","properties":{},"required":[],"additionalProperties":false},"strict":true}],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.output_item.added
data: {"type":"response.output_item.added","output_index":0,"item":{"type":"function_call","id":"fc_67e4700222008192b3a26ce30fe7ad02069d9116026394b6","call_id":"call_FtvrJsTMg7he0mTeThIqktyL","name":"get_net_worth","arguments":"","status":"in_progress"}}
event: response.function_call_arguments.delta
data: {"type":"response.function_call_arguments.delta","item_id":"fc_67e4700222008192b3a26ce30fe7ad02069d9116026394b6","output_index":0,"delta":"{}"}
event: response.function_call_arguments.done
data: {"type":"response.function_call_arguments.done","item_id":"fc_67e4700222008192b3a26ce30fe7ad02069d9116026394b6","output_index":0,"arguments":"{}"}
event: response.output_item.done
data: {"type":"response.output_item.done","output_index":0,"item":{"type":"function_call","id":"fc_67e4700222008192b3a26ce30fe7ad02069d9116026394b6","call_id":"call_FtvrJsTMg7he0mTeThIqktyL","name":"get_net_worth","arguments":"{}","status":"completed"}}
event: response.completed
data: {"type":"response.completed","response":{"id":"resp_67e4700196288192b27a4effc08dc47f069d9116026394b6","object":"response","created_at":1743024129,"status":"completed","error":null,"incomplete_details":null,"instructions":"Use the tools available to you to answer the user's question.","max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[{"type":"function_call","id":"fc_67e4700222008192b3a26ce30fe7ad02069d9116026394b6","call_id":"call_FtvrJsTMg7he0mTeThIqktyL","name":"get_net_worth","arguments":"{}","status":"completed"}],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[{"type":"function","description":"Gets user net worth data","name":"get_net_worth","parameters":{"type":"object","properties":{},"required":[],"additionalProperties":false},"strict":true}],"top_p":1.0,"truncation":"disabled","usage":{"input_tokens":271,"input_tokens_details":{"cached_tokens":0},"output_tokens":13,"output_tokens_details":{"reasoning_tokens":0},"total_tokens":284},"user":null,"metadata":{}}}
recorded_at: Wed, 26 Mar 2025 21:22:10 GMT
- request:
method: post
uri: https://api.openai.com/v1/responses
body:
encoding: UTF-8
string: '{"model":"gpt-4o","input":[{"role":"user","content":"What is my net
worth?"},{"type":"function_call_output","call_id":"call_FtvrJsTMg7he0mTeThIqktyL","output":"\"$124,200\""}],"instructions":"Use
the tools available to you to answer the user''s question.","tools":[],"previous_response_id":"resp_67e4700196288192b27a4effc08dc47f069d9116026394b6","stream":true}'
headers:
Content-Type:
- application/json
Authorization:
- Bearer <OPENAI_ACCESS_TOKEN>
Accept-Encoding:
- gzip;q=1.0,deflate;q=0.6,identity;q=0.3
Accept:
- "*/*"
User-Agent:
- Ruby
response:
status:
code: 200
message: OK
headers:
Date:
- Wed, 26 Mar 2025 21:22:10 GMT
Content-Type:
- text/event-stream; charset=utf-8
Transfer-Encoding:
- chunked
Connection:
- keep-alive
Openai-Version:
- '2020-10-01'
Openai-Organization:
- "<OPENAI_ORGANIZATION_ID>"
X-Request-Id:
- req_792bf572fac53f7e139b29d462933d8f
Openai-Processing-Ms:
- '148'
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Cf-Cache-Status:
- DYNAMIC
Set-Cookie:
- __cf_bm=HHguTnSUQFt9KezJAQCrQF_OHn8ZH1C4xDjXRgexdzM-1743024130-1.0.1.1-ZhqxuASVfISfGQbvvKSNy_OQiUfkeIPN2DZhors0K4cl_BzE_P5u9kbc1HkgwyW1A_6GNAenh8Fr9AkoJ0zSakdg5Dr9AU.lu5nr7adQ_60;
path=/; expires=Wed, 26-Mar-25 21:52:10 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=hX9Y33ruiC9mhYzrOoxyOh23Gy.MfQa54h9l5CllWlI-1743024130948-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
X-Content-Type-Options:
- nosniff
Server:
- cloudflare
Cf-Ray:
- 9269b3b0da83cf67-CMH
Alt-Svc:
- h3=":443"; ma=86400
body:
encoding: UTF-8
string: |+
event: response.created
data: {"type":"response.created","response":{"id":"resp_67e47002c5b48192a8202d45c6a929f8069d9116026394b6","object":"response","created_at":1743024130,"status":"in_progress","error":null,"incomplete_details":null,"instructions":"Use the tools available to you to answer the user's question.","max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[],"parallel_tool_calls":true,"previous_response_id":"resp_67e4700196288192b27a4effc08dc47f069d9116026394b6","reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.in_progress
data: {"type":"response.in_progress","response":{"id":"resp_67e47002c5b48192a8202d45c6a929f8069d9116026394b6","object":"response","created_at":1743024130,"status":"in_progress","error":null,"incomplete_details":null,"instructions":"Use the tools available to you to answer the user's question.","max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[],"parallel_tool_calls":true,"previous_response_id":"resp_67e4700196288192b27a4effc08dc47f069d9116026394b6","reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.output_item.added
data: {"type":"response.output_item.added","output_index":0,"item":{"type":"message","id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","status":"in_progress","role":"assistant","content":[]}}
event: response.content_part.added
data: {"type":"response.content_part.added","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"part":{"type":"output_text","text":"","annotations":[]}}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":"Your"}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":" net"}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":" worth"}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":" is"}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":" $"}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":"124"}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":","}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":"200"}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"delta":"."}
event: response.output_text.done
data: {"type":"response.output_text.done","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"text":"Your net worth is $124,200."}
event: response.content_part.done
data: {"type":"response.content_part.done","item_id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","output_index":0,"content_index":0,"part":{"type":"output_text","text":"Your net worth is $124,200.","annotations":[]}}
event: response.output_item.done
data: {"type":"response.output_item.done","output_index":0,"item":{"type":"message","id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","status":"completed","role":"assistant","content":[{"type":"output_text","text":"Your net worth is $124,200.","annotations":[]}]}}
event: response.completed
data: {"type":"response.completed","response":{"id":"resp_67e47002c5b48192a8202d45c6a929f8069d9116026394b6","object":"response","created_at":1743024130,"status":"completed","error":null,"incomplete_details":null,"instructions":"Use the tools available to you to answer the user's question.","max_output_tokens":null,"model":"gpt-4o-2024-08-06","output":[{"type":"message","id":"msg_67e47003483c819290ae392b826c4910069d9116026394b6","status":"completed","role":"assistant","content":[{"type":"output_text","text":"Your net worth is $124,200.","annotations":[]}]}],"parallel_tool_calls":true,"previous_response_id":"resp_67e4700196288192b27a4effc08dc47f069d9116026394b6","reasoning":{"effort":null,"generate_summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":{"input_tokens":85,"input_tokens_details":{"cached_tokens":0},"output_tokens":10,"output_tokens_details":{"reasoning_tokens":0},"total_tokens":95},"user":null,"metadata":{}}}
recorded_at: Wed, 26 Mar 2025 21:22:11 GMT
recorded_with: VCR 6.3.1
...