Maybe/app/models/provider/openai/chat_response_processor.rb
Zach Gollwitzer 2f6b11c18f
Personal finance AI (v1) (#2022)
* AI sidebar

* Add chat and message models with associations

* Implement AI chat functionality with sidebar and messaging system

- Add chat and messages controllers
- Create chat and message views
- Implement chat-related routes
- Add message broadcasting and user interactions
- Update application layout to support chat sidebar
- Enhance user model with initials method

* Refactor AI sidebar with enhanced chat menu and interactions

- Update sidebar layout with dynamic width and improved responsiveness
- Add new chat menu Stimulus controller for toggling between chat and chat list views
- Improve chat list display with recent chats and empty state
- Extract AI avatar to a partial for reusability
- Enhance message display and interaction styling
- Add more contextual buttons and interaction hints

* Improve chat scroll behavior and message styling

- Refactor chat scroll functionality with Stimulus controller
- Optimize message scrolling in chat views
- Update message styling for better visual hierarchy
- Enhance chat container layout with flex and auto-scroll
- Simplify message rendering across different chat views

* Extract AI avatar to a shared partial for consistent styling

- Refactor AI avatar rendering across chat views
- Replace hardcoded avatar markup with a reusable partial
- Simplify avatar display in chats and messages views

* Update sidebar controller to handle right panel width dynamically

- Add conditional width class for right sidebar panel
- Ensure consistent sidebar toggle behavior for both left and right panels
- Use specific width class for right panel (w-[375px])

* Refactor chat form and AI greeting with flexible partials

- Extract message form to a reusable partial with dynamic context support
- Create flexible AI greeting partial for consistent welcome messages
- Simplify chat and sidebar views by leveraging new partials
- Add support for different form scenarios (chat, new chat, sidebar)
- Improve code modularity and reduce duplication

* Add chat clearing functionality with dynamic menu options

- Implement clear chat action in ChatsController
- Add clear chat route to support clearing messages
- Update AI sidebar with dropdown menu for chat actions
- Preserve system message when clearing chat
- Enhance chat interaction with new menu options

* Add frontmatter to project structure documentation

- Create initial frontmatter for structure.mdc file
- Include description and configuration options
- Prepare for potential dynamic documentation rendering

* Update general project rules with additional guidelines

- Add rule for using `Current.family` instead of `current_family`
- Include new guidelines for testing, API routes, and solution approach
- Expand project-specific rules for more consistent development practices

* Add OpenAI gem and AI-friendly data representations

- Add `ruby-openai` gem for AI integration
- Implement `to_ai_readable_hash` methods in BalanceSheet and IncomeStatement (see the sketch after this list)
- Include Promptable module in both models
- Add savings rate calculation method in IncomeStatement
- Prepare financial models for AI-powered insights and interactions
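
A minimal sketch of the shape such a method might take; the field names and the `net_worth`/`total_assets`/`total_liabilities` accessors are illustrative assumptions, not the PR's actual implementation:

class BalanceSheet
  include Promptable

  # Hypothetical sketch: compact, labeled figures an LLM can reason about
  def to_ai_readable_hash
    {
      as_of_date: Date.current,
      net_worth: net_worth,                  # assumed existing accessor
      total_assets: total_assets,            # assumed existing accessor
      total_liabilities: total_liabilities   # assumed existing accessor
    }
  end
end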

* Enhance AI financial assistant with advanced querying and debugging capabilities

- Implement comprehensive AI financial query system with function-based interactions
- Add detailed debug logging for AI responses and function calls
- Extend BalanceSheet and IncomeStatement models with AI-friendly methods
- Create robust error handling and fallback mechanisms for AI queries
- Update chat and message views to support debug mode and enhanced rendering
- Add AI query routes and initial test coverage for financial assistant

* Refactor AI sidebar and chat layout with improved structure and comments

- Remove inline AI chat from application layout
- Enhance AI sidebar with more semantic HTML structure
- Add descriptive comments to clarify different sections of chat view
- Improve flex layout and scrolling behavior in chat messages container
- Optimize message rendering with more explicit class names and structure

* Add Markdown rendering support for AI chat messages

- Implement `markdown` helper method in ApplicationHelper using Redcarpet (sketched after this list)
- Update message view to render AI messages with Markdown formatting
- Add comprehensive Markdown rendering options (tables, code blocks, links)
- Enhance AI Financial Assistant prompt to encourage Markdown usage
- Remove commented Markdown CSS in Tailwind application stylesheet
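
A minimal sketch of what such a helper might look like, assuming common Redcarpet options; the PR's exact option set is not shown here:

module ApplicationHelper
  # Render Markdown text to sanitized HTML for AI chat messages
  def markdown(text)
    renderer = Redcarpet::Render::HTML.new(filter_html: true, hard_wrap: true)
    parser = Redcarpet::Markdown.new(renderer, tables: true, fenced_code_blocks: true, autolink: true)
    parser.render(text.to_s).html_safe
  end
end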

* Missing comma

* Enhance AI response processing with chat history context

* Improve AI debug logging with payload size limits and internal message flag

* Enhance AI chat interaction with improved thinking indicator and scrolling behavior

* Add AI consent and enable/disable functionality for AI chat

* Upgrade Biome and refactor JavaScript template literals

- Update @biomejs/biome to latest version with caret (^) notation
- Refactor AI query and chat controllers to use template literals
- Standardize npm scripts formatting in package.json

* Add beta testing usage note to AI consent modal

* Update test fixtures and configurations for AI chat functionality

- Add family association to chat fixtures and tests
- Set consistent password digest for test users
- Enable AI for test users
- Add OpenAI access token for test environment
- Update chat and user model tests to include family context

* Simplify data model and get tests passing

* Remove structure.mdc from version control

* Integrate AI chat styles into existing prose pattern

* Match Figma design spec, implement Turbo frames and actions for chats controller

* AI rules refresh

* Consolidate Stimulus controllers, thinking state, and views

* Naming, domain alignment

* Reset migrations

* Improve data model to support tool calls and message types

* Tool calling tests and fixtures

* Tool call implementation and test

* Get assistant test working again

* Test updates

* Process tool calls within provider

* Chat UI back to working state again

* Remove stale code

* Tests passing

* Update openai class naming to avoid conflicts

* Reconfigure test env

* Rebuild gemfile

* Fix naming conflicts for ChatResponse

* Message styles

* Use OpenAI conversation state management

* Assistant function base implementation

* Add back thinking messages, clean up error handling for chat

* Fix sync error when security price has bad data from provider

* Add balance sheet function to assistant

* Add better function calling error visibility

* Add income statement function

* Simplify and clean up "thinking" interactions with Turbo frames

* Remove stale data definitions from functions

* Ensure VCR fixtures working with latest code

* Basic stream implementation

* Get streaming working

* Make AI sidebar wider when left sidebar is collapsed

* Get tests working with streaming responses

* Centralize provider error handling

* Provider data boundaries

---------

Co-authored-by: Josh Pigford <josh@joshpigford.com>
2025-03-28 13:08:22 -04:00


class Provider::Openai::ChatResponseProcessor
  def initialize(message:, client:, instructions: nil, available_functions: [], streamer: nil)
    @client = client
    @message = message
    @instructions = instructions
    @available_functions = available_functions
    @streamer = streamer
  end

  def process
    # First pass: the model either answers directly or requests tool calls
    first_response = fetch_response(previous_response_id: previous_openai_response_id)

    if first_response.functions.empty?
      if streamer.present?
        streamer.call(Provider::LlmProvider::StreamChunk.new(type: "response", data: first_response))
      end

      return first_response
    end

    # Second pass: run the requested functions and send their results back
    executed_functions = execute_pending_functions(first_response.functions)

    follow_up_response = fetch_response(
      executed_functions: executed_functions,
      previous_response_id: first_response.id
    )

    if streamer.present?
      streamer.call(Provider::LlmProvider::StreamChunk.new(type: "response", data: follow_up_response))
    end

    follow_up_response
  end

  private
    attr_reader :client, :message, :instructions, :available_functions, :streamer

    # A tool call the model has requested but we have not yet executed
    PendingFunction = Data.define(:id, :call_id, :name, :arguments)

    def fetch_response(executed_functions: [], previous_response_id: nil)
      function_results = executed_functions.map do |executed_function|
        {
          type: "function_call_output",
          call_id: executed_function.call_id,
          output: executed_function.result.to_json
        }
      end

      prepared_input = input + function_results

      # No need to pass tools for follow-up messages that provide function results
      prepared_tools = executed_functions.empty? ? tools : []

      raw_response = nil

      internal_streamer = proc do |chunk|
        type = chunk.dig("type")

        if streamer.present?
          case type
          when "response.output_text.delta", "response.refusal.delta"
            # We don't distinguish between text and refusal yet, so stream both the same
            streamer.call(Provider::LlmProvider::StreamChunk.new(type: "output_text", data: chunk.dig("delta")))
          when "response.function_call_arguments.done"
            streamer.call(Provider::LlmProvider::StreamChunk.new(type: "function_request", data: chunk.dig("arguments")))
          end
        end

        # The final "response.completed" chunk carries the full response body
        if type == "response.completed"
          raw_response = chunk.dig("response")
        end
      end

      client.responses.create(parameters: {
        model: model,
        input: prepared_input,
        instructions: instructions,
        tools: prepared_tools,
        previous_response_id: previous_response_id,
        stream: internal_streamer
      })

      # Raised when the stream never completed or OpenAI reported a failure
      if raw_response.nil? || raw_response.dig("status") == "failed" || raw_response.dig("status") == "incomplete"
        raise Provider::Openai::Error.new("OpenAI returned a failed or incomplete response", { response: raw_response })
      end

      response_output = raw_response.dig("output")

      functions_output = if executed_functions.any?
        executed_functions
      else
        extract_pending_functions(response_output)
      end

      Provider::LlmProvider::ChatResponse.new(
        id: raw_response.dig("id"),
        messages: extract_messages(response_output),
        functions: functions_output,
        model: raw_response.dig("model")
      )
    end

    def chat
      message.chat
    end

    def model
      message.ai_model
    end

    def previous_openai_response_id
      chat.latest_assistant_response_id
    end

    # Since we're using OpenAI's conversation state management, all we need to pass
    # to input is the user message we're currently responding to.
    def input
      [ { role: "user", content: message.content } ]
    end

    def extract_messages(response_output)
      message_items = response_output.filter { |item| item.dig("type") == "message" }

      message_items.map do |item|
        output_text = item.dig("content").map do |content|
          text = content.dig("text")
          refusal = content.dig("refusal")

          text || refusal
        end.flatten.join("\n")

        Provider::LlmProvider::Message.new(
          id: item.dig("id"),
          content: output_text,
        )
      end
    end

    def extract_pending_functions(response_output)
      response_output.filter { |item| item.dig("type") == "function_call" }.map do |item|
        PendingFunction.new(
          id: item.dig("id"),
          call_id: item.dig("call_id"),
          name: item.dig("name"),
          arguments: item.dig("arguments"),
        )
      end
    end

    def execute_pending_functions(pending_functions)
      pending_functions.map do |pending_function|
        execute_function(pending_function)
      end
    end

    def execute_function(fn)
      fn_instance = available_functions.find { |f| f.name == fn.name }
      parsed_args = JSON.parse(fn.arguments)
      result = fn_instance.call(parsed_args)

      Provider::LlmProvider::FunctionExecution.new(
        id: fn.id,
        call_id: fn.call_id,
        name: fn.name,
        arguments: parsed_args,
        result: result
      )
    rescue => e
      # Wrap any failure with the function name and args for easier debugging
      fn_execution_details = {
        fn_name: fn.name,
        fn_args: parsed_args
      }

      raise Provider::Openai::Error.new(e, fn_execution_details)
    end

    # Function definitions in the format the OpenAI Responses API expects
    def tools
      available_functions.map do |fn|
        {
          type: "function",
          name: fn.name,
          description: fn.description,
          parameters: fn.params_schema,
          strict: fn.strict_mode?
        }
      end
    end
end
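
A hedged usage sketch of the processor above; the `message` shape, the function objects, and the streamer handling are assumptions inferred from the interfaces this class calls, not code from this PR:

client = OpenAI::Client.new(access_token: ENV["OPENAI_ACCESS_TOKEN"])

processor = Provider::Openai::ChatResponseProcessor.new(
  message: message,          # assumed to respond to #chat, #ai_model, and #content
  client: client,
  instructions: "You are a personal finance assistant.",
  available_functions: [],   # objects responding to #name, #description, #params_schema, #strict_mode?, #call
  streamer: proc { |chunk| print chunk.data if chunk.type == "output_text" }
)

response = processor.process  # a Provider::LlmProvider::ChatResponse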