mirror of
https://github.com/maybe-finance/maybe.git
synced 2025-07-25 08:09:38 +02:00
Personal finance AI (v1) (#2022)
* AI sidebar
* Add chat and message models with associations
* Implement AI chat functionality with sidebar and messaging system
  - Add chat and messages controllers
  - Create chat and message views
  - Implement chat-related routes
  - Add message broadcasting and user interactions
  - Update application layout to support chat sidebar
  - Enhance user model with initials method
* Refactor AI sidebar with enhanced chat menu and interactions
  - Update sidebar layout with dynamic width and improved responsiveness
  - Add new chat menu Stimulus controller for toggling between chat and chat list views
  - Improve chat list display with recent chats and empty state
  - Extract AI avatar to a partial for reusability
  - Enhance message display and interaction styling
  - Add more contextual buttons and interaction hints
* Improve chat scroll behavior and message styling
  - Refactor chat scroll functionality with Stimulus controller
  - Optimize message scrolling in chat views
  - Update message styling for better visual hierarchy
  - Enhance chat container layout with flex and auto-scroll
  - Simplify message rendering across different chat views
* Extract AI avatar to a shared partial for consistent styling
  - Refactor AI avatar rendering across chat views
  - Replace hardcoded avatar markup with a reusable partial
  - Simplify avatar display in chats and messages views
* Update sidebar controller to handle right panel width dynamically
  - Add conditional width class for right sidebar panel
  - Ensure consistent sidebar toggle behavior for both left and right panels
  - Use specific width class for right panel (w-[375px])
* Refactor chat form and AI greeting with flexible partials
  - Extract message form to a reusable partial with dynamic context support
  - Create flexible AI greeting partial for consistent welcome messages
  - Simplify chat and sidebar views by leveraging new partials
  - Add support for different form scenarios (chat, new chat, sidebar)
  - Improve code modularity and reduce duplication
* Add chat clearing functionality with dynamic menu options
  - Implement clear chat action in ChatsController
  - Add clear chat route to support clearing messages
  - Update AI sidebar with dropdown menu for chat actions
  - Preserve system message when clearing chat
  - Enhance chat interaction with new menu options
* Add frontmatter to project structure documentation
  - Create initial frontmatter for structure.mdc file
  - Include description and configuration options
  - Prepare for potential dynamic documentation rendering
* Update general project rules with additional guidelines
  - Add rule for using `Current.family` instead of `current_family`
  - Include new guidelines for testing, API routes, and solution approach
  - Expand project-specific rules for more consistent development practices
* Add OpenAI gem and AI-friendly data representations
  - Add `ruby-openai` gem for AI integration
  - Implement `to_ai_readable_hash` methods in BalanceSheet and IncomeStatement
  - Include Promptable module in both models
  - Add savings rate calculation method in IncomeStatement
  - Prepare financial models for AI-powered insights and interactions
* Enhance AI Financial Assistant with advanced querying and debugging capabilities
  - Implement comprehensive AI financial query system with function-based interactions
  - Add detailed debug logging for AI responses and function calls
  - Extend BalanceSheet and IncomeStatement models with AI-friendly methods
  - Create robust error handling and fallback mechanisms for AI queries
  - Update chat and message views to support debug mode and enhanced rendering
  - Add AI query routes and initial test coverage for financial assistant
* Refactor AI sidebar and chat layout with improved structure and comments
  - Remove inline AI chat from application layout
  - Enhance AI sidebar with more semantic HTML structure
  - Add descriptive comments to clarify different sections of chat view
  - Improve flex layout and scrolling behavior in chat messages container
  - Optimize message rendering with more explicit class names and structure
* Add Markdown rendering support for AI chat messages
  - Implement `markdown` helper method in ApplicationHelper using Redcarpet
  - Update message view to render AI messages with Markdown formatting
  - Add comprehensive Markdown rendering options (tables, code blocks, links)
  - Enhance AI Financial Assistant prompt to encourage Markdown usage
  - Remove commented Markdown CSS in Tailwind application stylesheet
* Missing comma
* Enhance AI response processing with chat history context
* Improve AI debug logging with payload size limits and internal message flag
* Enhance AI chat interaction with improved thinking indicator and scrolling behavior
* Add AI consent and enable/disable functionality for AI chat
* Upgrade Biome and refactor JavaScript template literals
  - Update @biomejs/biome to latest version with caret (^) notation
  - Refactor AI query and chat controllers to use template literals
  - Standardize npm scripts formatting in package.json
* Add beta testing usage note to AI consent modal
* Update test fixtures and configurations for AI chat functionality
  - Add family association to chat fixtures and tests
  - Set consistent password digest for test users
  - Enable AI for test users
  - Add OpenAI access token for test environment
  - Update chat and user model tests to include family context
* Simplify data model and get tests passing
* Remove structure.mdc from version control
* Integrate AI chat styles into existing prose pattern
* Match Figma design spec, implement Turbo frames and actions for chats controller
* AI rules refresh
* Consolidate Stimulus controllers, thinking state, controllers, and views
* Naming, domain alignment
* Reset migrations
* Improve data model to support tool calls and message types
* Tool calling tests and fixtures
* Tool call implementation and test
* Get assistant test working again
* Test updates
* Process tool calls within provider
* Chat UI back to working state again
* Remove stale code
* Tests passing
* Update OpenAI class naming to avoid conflicts
* Reconfigure test env
* Rebuild Gemfile
* Fix naming conflicts for ChatResponse
* Message styles
* Use OpenAI conversation state management
* Assistant function base implementation
* Add back thinking messages, clean up error handling for chat
* Fix sync error when security price has bad data from provider
* Add balance sheet function to assistant
* Add better function calling error visibility
* Add income statement function
* Simplify and clean up "thinking" interactions with Turbo frames
* Remove stale data definitions from functions
* Ensure VCR fixtures working with latest code
* Basic stream implementation
* Get streaming working
* Make AI sidebar wider when left sidebar is collapsed
* Get tests working with streaming responses
* Centralize provider error handling
* Provider data boundaries
--------
Co-authored-by: Josh Pigford <josh@joshpigford.com>
This commit is contained in:
parent 8e6b81af77
commit 2f6b11c18f
126 changed files with 3576 additions and 462 deletions
14 app/models/provider/exchange_rate_provider.rb Normal file

```ruby
module Provider::ExchangeRateProvider
  extend ActiveSupport::Concern

  def fetch_exchange_rate(from:, to:, date:)
    raise NotImplementedError, "Subclasses must implement #fetch_exchange_rate"
  end

  def fetch_exchange_rates(from:, to:, start_date:, end_date:)
    raise NotImplementedError, "Subclasses must implement #fetch_exchange_rates"
  end

  private
    Rate = Data.define(:date, :from, :to, :rate)
end
```
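The concern above only defines the interface a concrete provider must satisfy. As a rough sketch of how a subclass might implement it (the `StubProvider` class and the hard-coded 0.92 rate below are illustrative stand-ins, not part of this commit):

```ruby
# Hypothetical provider satisfying the ExchangeRateProvider interface.
# Rate mirrors the Data.define in the concern; StubProvider and the
# hard-coded 0.92 rate are invented for illustration.
Rate = Data.define(:date, :from, :to, :rate)

class StubProvider
  def fetch_exchange_rate(from:, to:, date:)
    # A real provider would call its HTTP API here.
    Rate.new(date: date, from: from, to: to, rate: 0.92)
  end
end

rate = StubProvider.new.fetch_exchange_rate(from: "USD", to: "EUR", date: "2025-01-01")
puts rate.rate # => 0.92
```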
13 app/models/provider/llm_provider.rb Normal file

```ruby
module Provider::LlmProvider
  extend ActiveSupport::Concern

  def chat_response(message, instructions: nil, available_functions: [], streamer: nil)
    raise NotImplementedError, "Subclasses must implement #chat_response"
  end

  private
    StreamChunk = Data.define(:type, :data)
    ChatResponse = Data.define(:id, :messages, :functions, :model)
    Message = Data.define(:id, :content)
    FunctionExecution = Data.define(:id, :call_id, :name, :arguments, :result)
end
```
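Consumers receive `StreamChunk` values through the optional `streamer` callable. A minimal sketch of how a caller might accumulate streamed text (the chunk sequence fed in below is invented for illustration):

```ruby
# StreamChunk mirrors the Data.define in the concern; the chunk
# sequence below is invented, not a real provider stream.
StreamChunk = Data.define(:type, :data)

buffer = +""
streamer = proc do |chunk|
  # Only text deltas carry displayable content; other chunk types
  # (e.g. "response") signal completion and carry structured data.
  buffer << chunk.data if chunk.type == "output_text"
end

[
  StreamChunk.new(type: "output_text", data: "Hello, "),
  StreamChunk.new(type: "output_text", data: "world"),
  StreamChunk.new(type: "response", data: nil)
].each { |chunk| streamer.call(chunk) }

puts buffer # => Hello, world
```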
30 app/models/provider/openai.rb Normal file

```ruby
class Provider::Openai < Provider
  include LlmProvider

  MODELS = %w[gpt-4o]

  def initialize(access_token)
    @client = ::OpenAI::Client.new(access_token: access_token)
  end

  def supports_model?(model)
    MODELS.include?(model)
  end

  def chat_response(message, instructions: nil, available_functions: [], streamer: nil)
    with_provider_response do
      processor = ChatResponseProcessor.new(
        client: client,
        message: message,
        instructions: instructions,
        available_functions: available_functions,
        streamer: streamer
      )

      processor.process
    end
  end

  private
    attr_reader :client
end
```
188 app/models/provider/openai/chat_response_processor.rb Normal file

```ruby
class Provider::Openai::ChatResponseProcessor
  def initialize(message:, client:, instructions: nil, available_functions: [], streamer: nil)
    @client = client
    @message = message
    @instructions = instructions
    @available_functions = available_functions
    @streamer = streamer
  end

  def process
    first_response = fetch_response(previous_response_id: previous_openai_response_id)

    if first_response.functions.empty?
      if streamer.present?
        streamer.call(Provider::LlmProvider::StreamChunk.new(type: "response", data: first_response))
      end

      return first_response
    end

    executed_functions = execute_pending_functions(first_response.functions)

    follow_up_response = fetch_response(
      executed_functions: executed_functions,
      previous_response_id: first_response.id
    )

    if streamer.present?
      streamer.call(Provider::LlmProvider::StreamChunk.new(type: "response", data: follow_up_response))
    end

    follow_up_response
  end

  private
    attr_reader :client, :message, :instructions, :available_functions, :streamer

    PendingFunction = Data.define(:id, :call_id, :name, :arguments)

    def fetch_response(executed_functions: [], previous_response_id: nil)
      function_results = executed_functions.map do |executed_function|
        {
          type: "function_call_output",
          call_id: executed_function.call_id,
          output: executed_function.result.to_json
        }
      end

      prepared_input = input + function_results

      # No need to pass tools for follow-up messages that provide function results
      prepared_tools = executed_functions.empty? ? tools : []

      raw_response = nil

      internal_streamer = proc do |chunk|
        type = chunk.dig("type")

        if streamer.present?
          case type
          when "response.output_text.delta", "response.refusal.delta"
            # We don't distinguish between text and refusal yet, so stream both the same
            streamer.call(Provider::LlmProvider::StreamChunk.new(type: "output_text", data: chunk.dig("delta")))
          when "response.function_call_arguments.done"
            streamer.call(Provider::LlmProvider::StreamChunk.new(type: "function_request", data: chunk.dig("arguments")))
          end
        end

        if type == "response.completed"
          raw_response = chunk.dig("response")
        end
      end

      client.responses.create(parameters: {
        model: model,
        input: prepared_input,
        instructions: instructions,
        tools: prepared_tools,
        previous_response_id: previous_response_id,
        stream: internal_streamer
      })

      if raw_response.dig("status") == "failed" || raw_response.dig("status") == "incomplete"
        raise Provider::Openai::Error.new("OpenAI returned a failed or incomplete response", { response: raw_response })
      end

      response_output = raw_response.dig("output")

      functions_output = if executed_functions.any?
        executed_functions
      else
        extract_pending_functions(response_output)
      end

      Provider::LlmProvider::ChatResponse.new(
        id: raw_response.dig("id"),
        messages: extract_messages(response_output),
        functions: functions_output,
        model: raw_response.dig("model")
      )
    end

    def chat
      message.chat
    end

    def model
      message.ai_model
    end

    def previous_openai_response_id
      chat.latest_assistant_response_id
    end

    # Since we're using OpenAI's conversation state management, all we need to pass
    # to input is the user message we're currently responding to.
    def input
      [ { role: "user", content: message.content } ]
    end

    def extract_messages(response_output)
      message_items = response_output.filter { |item| item.dig("type") == "message" }

      message_items.map do |item|
        output_text = item.dig("content").map do |content|
          text = content.dig("text")
          refusal = content.dig("refusal")

          text || refusal
        end.flatten.join("\n")

        Provider::LlmProvider::Message.new(
          id: item.dig("id"),
          content: output_text
        )
      end
    end

    def extract_pending_functions(response_output)
      response_output.filter { |item| item.dig("type") == "function_call" }.map do |item|
        PendingFunction.new(
          id: item.dig("id"),
          call_id: item.dig("call_id"),
          name: item.dig("name"),
          arguments: item.dig("arguments")
        )
      end
    end

    def execute_pending_functions(pending_functions)
      pending_functions.map do |pending_function|
        execute_function(pending_function)
      end
    end

    def execute_function(fn)
      fn_instance = available_functions.find { |f| f.name == fn.name }
      parsed_args = JSON.parse(fn.arguments)
      result = fn_instance.call(parsed_args)

      Provider::LlmProvider::FunctionExecution.new(
        id: fn.id,
        call_id: fn.call_id,
        name: fn.name,
        arguments: parsed_args,
        result: result
      )
    rescue => e
      fn_execution_details = {
        fn_name: fn.name,
        fn_args: parsed_args
      }

      raise Provider::Openai::Error.new(e, fn_execution_details)
    end

    def tools
      available_functions.map do |fn|
        {
          type: "function",
          name: fn.name,
          description: fn.description,
          parameters: fn.params_schema,
          strict: fn.strict_mode?
        }
      end
    end
end
```
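The `execute_function` step pairs each pending function call from the model with a registered function object, parses the JSON arguments, and wraps the result for the follow-up request. A self-contained sketch of that round trip (`GetBalance` and its payload are invented stand-ins, not app classes):

```ruby
require "json"

# Stand-in value objects mirroring the Data.define declarations
# used by the processor.
PendingFunction = Data.define(:id, :call_id, :name, :arguments)
FunctionExecution = Data.define(:id, :call_id, :name, :arguments, :result)

# Hypothetical function object exposing the #name/#call interface the
# processor expects; the balance payload is invented.
class GetBalance
  def name
    "get_balance"
  end

  def call(args)
    { "balance" => 100, "currency" => args["currency"] }
  end
end

available_functions = [GetBalance.new]

# A pending call as it would be extracted from the model's output.
pending = PendingFunction.new(id: "fn_1", call_id: "call_1",
                              name: "get_balance", arguments: '{"currency":"USD"}')

# Look up the matching function, parse its JSON arguments, execute it,
# and record the execution for the follow-up request.
fn_instance = available_functions.find { |f| f.name == pending.name }
parsed_args = JSON.parse(pending.arguments)

executed = FunctionExecution.new(
  id: pending.id,
  call_id: pending.call_id,
  name: pending.name,
  arguments: parsed_args,
  result: fn_instance.call(parsed_args)
)

puts executed.result["balance"] # => 100
```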
13 app/models/provider/openai/chat_streamer.rb Normal file

```ruby
# A stream proxy for OpenAI chat responses
#
# - Consumes an OpenAI chat response stream
# - Outputs a generic "Chat Provider Stream" interface to consumers (e.g. `Assistant`)
class Provider::Openai::ChatStreamer
  def initialize(output_stream)
    @output_stream = output_stream
  end

  def call(chunk)
    @output_stream.call(chunk)
  end
end
```
91 app/models/provider/registry.rb Normal file

```ruby
class Provider::Registry
  include ActiveModel::Validations

  Error = Class.new(StandardError)

  CONCEPTS = %i[exchange_rates securities llm]

  validates :concept, inclusion: { in: CONCEPTS }

  class << self
    def for_concept(concept)
      new(concept.to_sym)
    end

    def get_provider(name)
      send(name)
    rescue NoMethodError
      raise Error.new("Provider '#{name}' not found in registry")
    end

    private
      def synth
        api_key = ENV.fetch("SYNTH_API_KEY", Setting.synth_api_key)

        return nil unless api_key.present?

        Provider::Synth.new(api_key)
      end

      def plaid_us
        config = Rails.application.config.plaid

        return nil unless config.present?

        Provider::Plaid.new(config, region: :us)
      end

      def plaid_eu
        config = Rails.application.config.plaid_eu

        return nil unless config.present?

        Provider::Plaid.new(config, region: :eu)
      end

      def github
        Provider::Github.new
      end

      def openai
        access_token = ENV.fetch("OPENAI_ACCESS_TOKEN", Setting.openai_access_token)

        return nil unless access_token.present?

        Provider::Openai.new(access_token)
      end
  end

  def initialize(concept)
    @concept = concept
    validate!
  end

  def providers
    available_providers.map { |p| self.class.send(p) }
  end

  def get_provider(name)
    provider_method = available_providers.find { |p| p == name.to_sym }

    raise Error.new("Provider '#{name}' not found for concept: #{concept}") unless provider_method.present?

    self.class.send(provider_method)
  end

  private
    attr_reader :concept

    def available_providers
      case concept
      when :exchange_rates
        %i[synth]
      when :securities
        %i[synth]
      when :llm
        %i[openai]
      else
        %i[synth plaid_us plaid_eu github openai]
      end
    end
end
```
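The class-level `get_provider` relies on `send` to dispatch to the private provider factory methods, turning an unknown name into a registry error. A stripped-down sketch of that lookup (`MiniRegistry` and its `:stub_openai` return value are hypothetical stand-ins):

```ruby
# Stripped-down version of the registry's send-based dispatch.
# MiniRegistry and :stub_openai are illustrative, not app code.
class MiniRegistry
  Error = Class.new(StandardError)

  class << self
    def get_provider(name)
      # send reaches private methods, so each provider factory
      # stays private while remaining addressable by name.
      send(name)
    rescue NoMethodError
      raise Error, "Provider '#{name}' not found in registry"
    end

    private
      def openai
        :stub_openai # a real registry builds Provider::Openai here
      end
  end
end

puts MiniRegistry.get_provider(:openai) # => stub_openai
```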
24 app/models/provider/security_provider.rb Normal file

```ruby
module Provider::SecurityProvider
  extend ActiveSupport::Concern

  def search_securities(symbol, country_code: nil, exchange_operating_mic: nil)
    raise NotImplementedError, "Subclasses must implement #search_securities"
  end

  def fetch_security_info(security)
    raise NotImplementedError, "Subclasses must implement #fetch_security_info"
  end

  def fetch_security_price(security, date:)
    raise NotImplementedError, "Subclasses must implement #fetch_security_price"
  end

  def fetch_security_prices(security, start_date:, end_date:)
    raise NotImplementedError, "Subclasses must implement #fetch_security_prices"
  end

  private
    Security = Data.define(:symbol, :name, :logo_url, :exchange_operating_mic)
    SecurityInfo = Data.define(:symbol, :name, :links, :logo_url, :description, :kind)
    Price = Data.define(:security, :date, :price, :currency)
end
```
```diff
@@ -1,20 +1,19 @@
 class Provider::Synth < Provider
-  include ExchangeRate::Provideable
-  include Security::Provideable
+  include ExchangeRateProvider, SecurityProvider

   def initialize(api_key)
     @api_key = api_key
   end

   def healthy?
-    provider_response do
+    with_provider_response do
       response = client.get("#{base_url}/user")
       JSON.parse(response.body).dig("id").present?
     end
   end

   def usage
-    provider_response do
+    with_provider_response do
       response = client.get("#{base_url}/user")

       parsed = JSON.parse(response.body)
@@ -37,7 +36,7 @@ class Provider::Synth < Provider
   # ================================

   def fetch_exchange_rate(from:, to:, date:)
-    provider_response retries: 2 do
+    with_provider_response retries: 2 do
       response = client.get("#{base_url}/rates/historical") do |req|
         req.params["date"] = date.to_s
         req.params["from"] = from
@@ -46,19 +45,12 @@ class Provider::Synth < Provider

       rates = JSON.parse(response.body).dig("data", "rates")

-      ExchangeRate::Provideable::FetchRateData.new(
-        rate: ExchangeRate.new(
-          from_currency: from,
-          to_currency: to,
-          date: date,
-          rate: rates.dig(to)
-        )
-      )
+      Rate.new(date:, from:, to:, rate: rates.dig(to))
     end
   end

   def fetch_exchange_rates(from:, to:, start_date:, end_date:)
-    provider_response retries: 1 do
+    with_provider_response retries: 1 do
       data = paginate(
         "#{base_url}/rates/historical-range",
         from: from,
@@ -69,16 +61,9 @@ class Provider::Synth < Provider
         body.dig("data")
       end

-      ExchangeRate::Provideable::FetchRatesData.new(
-        rates: data.paginated.map do |exchange_rate|
-          ExchangeRate.new(
-            from_currency: from,
-            to_currency: to,
-            date: exchange_rate.dig("date"),
-            rate: exchange_rate.dig("rates", to)
-          )
-        end
-      )
+      data.paginated.map do |rate|
+        Rate.new(date: rate.dig("date"), from:, to:, rate: rate.dig("rates", to))
+      end
     end
   end

@@ -87,7 +72,7 @@ class Provider::Synth < Provider
   # ================================

   def search_securities(symbol, country_code: nil, exchange_operating_mic: nil)
-    provider_response do
+    with_provider_response do
       response = client.get("#{base_url}/tickers/search") do |req|
         req.params["name"] = symbol
         req.params["dataset"] = "limited"
@@ -98,24 +83,19 @@ class Provider::Synth < Provider

       parsed = JSON.parse(response.body)

-      Security::Provideable::Search.new(
-        securities: parsed.dig("data").map do |security|
-          Security.new(
-            ticker: security.dig("symbol"),
-            name: security.dig("name"),
-            logo_url: security.dig("logo_url"),
-            exchange_acronym: security.dig("exchange", "acronym"),
-            exchange_mic: security.dig("exchange", "mic_code"),
-            exchange_operating_mic: security.dig("exchange", "operating_mic_code"),
-            country_code: security.dig("exchange", "country_code")
-          )
-        end
-      )
+      parsed.dig("data").map do |security|
+        Security.new(
+          symbol: security.dig("symbol"),
+          name: security.dig("name"),
+          logo_url: security.dig("logo_url"),
+          exchange_operating_mic: security.dig("exchange", "operating_mic_code"),
+        )
+      end
     end
   end

   def fetch_security_info(security)
-    provider_response do
+    with_provider_response do
       response = client.get("#{base_url}/tickers/#{security.ticker}") do |req|
         req.params["mic_code"] = security.exchange_mic if security.exchange_mic.present?
         req.params["operating_mic"] = security.exchange_operating_mic if security.exchange_operating_mic.present?
@@ -123,8 +103,8 @@ class Provider::Synth < Provider

       data = JSON.parse(response.body).dig("data")

-      Security::Provideable::SecurityInfo.new(
-        ticker: security.ticker,
+      SecurityInfo.new(
+        symbol: data.dig("ticker"),
         name: data.dig("name"),
         links: data.dig("links"),
         logo_url: data.dig("logo_url"),
@@ -135,19 +115,17 @@ class Provider::Synth < Provider
   end

   def fetch_security_price(security, date:)
-    provider_response do
+    with_provider_response do
       historical_data = fetch_security_prices(security, start_date: date, end_date: date)

-      raise ProviderError, "No prices found for security #{security.ticker} on date #{date}" if historical_data.data.prices.empty?
+      raise ProviderError, "No prices found for security #{security.ticker} on date #{date}" if historical_data.data.empty?

-      Security::Provideable::PriceData.new(
-        price: historical_data.data.prices.first
-      )
+      historical_data.data.first
     end
   end

   def fetch_security_prices(security, start_date:, end_date:)
-    provider_response retries: 1 do
+    with_provider_response retries: 1 do
       params = {
         start_date: start_date,
         end_date: end_date
@@ -167,16 +145,14 @@ class Provider::Synth < Provider
       exchange_mic = data.first_page.dig("exchange", "mic_code")
       exchange_operating_mic = data.first_page.dig("exchange", "operating_mic_code")

-      Security::Provideable::PricesData.new(
-        prices: data.paginated.map do |price|
-          Security::Price.new(
-            security: security,
-            date: price.dig("date"),
-            price: price.dig("close") || price.dig("open"),
-            currency: currency
-          )
-        end
-      )
+      data.paginated.map do |price|
+        Price.new(
+          security: security,
+          date: price.dig("date"),
+          price: price.dig("close") || price.dig("open"),
+          currency: currency
+        )
+      end
     end
   end

@@ -185,7 +161,7 @@ class Provider::Synth < Provider
   # ================================

   def enrich_transaction(description, amount: nil, date: nil, city: nil, state: nil, country: nil)
-    provider_response do
+    with_provider_response do
       params = {
         description: description,
         amount: amount,
@@ -216,9 +192,7 @@ class Provider::Synth < Provider
     [
       Faraday::TimeoutError,
       Faraday::ConnectionFailed,
-      Faraday::SSLError,
-      Faraday::ClientError,
-      Faraday::ServerError
+      Faraday::SSLError
     ]
   end
```
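The net effect of the `Provider::Synth` changes is that provider methods now return plain `Data` values (`Rate`, `Security`, `Price`) instead of `Provideable` wrapper structs. A sketch of the new `fetch_exchange_rates` return shape (the paginated JSON payload below is invented):

```ruby
# Sketch of the flattened return shape after this diff: an array of
# Rate values rather than a FetchRatesData wrapper. Payload invented.
Rate = Data.define(:date, :from, :to, :rate)

paginated = [
  { "date" => "2025-01-01", "rates" => { "EUR" => 0.92 } },
  { "date" => "2025-01-02", "rates" => { "EUR" => 0.93 } }
]

rates = paginated.map do |rate|
  Rate.new(date: rate.dig("date"), from: "USD", to: "EUR",
           rate: rate.dig("rates", "EUR"))
end

puts rates.map(&:rate).inspect # => [0.92, 0.93]
```

Callers iterate the array directly, which is what lets `fetch_security_price` above reduce to `historical_data.data.first`.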