Future Anthem will guide you through three steps to unlock the power of Amplifier AI.
Step 1: Data Alignment
Amplifier AI requires three data types:
- Anonymised Transactional Data - to reflect real-time player activity at a transactional level, typically sourced from your back-end platform.
- Game and Bet Metadata - to categorise and analyse game similarities and betting patterns accurately, providing relevant insights and recommendations.
- Player Metadata - to enhance Amplifier AI’s personalisation models.
Example fields of Player Metadata:
- Player Registration Date (essential)
- First Game Played (recommended)
- First Play Date (recommended)
- Acquisition Channel (recommended)
To safeguard privacy, we do not require personal data; only anonymised unique player identifiers are needed.
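As an illustration, a player metadata record covering the fields above might look like the following sketch. The field names and values are examples only, not a required schema; our team maps your own field names during integration.

```python
import json

# Illustrative player metadata record -- field names are examples only.
# Note the player identifier is an anonymised ID, not personal data.
player_metadata = {
    "player_id": "a1b2c3d4",            # anonymised unique identifier
    "registration_date": "2024-03-15",  # essential
    "first_game_played": "game_8821",   # recommended
    "first_play_date": "2024-03-16",    # recommended
    "acquisition_channel": "affiliate", # recommended
}

print(json.dumps(player_metadata, indent=2))
```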
Future Anthem has extensive experience deploying custom solutions across multiple Gaming and Sports platforms, successfully integrating Amplifier AI with diverse environments, and addressing challenges such as data standardisation and real-time synchronisation.
Please review the typical data requirements for each vertical relevant to you: Casino and Sports.
Casino integration
Amplifier AI’s flexible data structure ensures seamless integration, requiring ‘player session’ data as well as ‘available games’ data, typically sourced from your back-end platform.
Our team will map your data to Amplifier AI’s data structure, so it’s helpful to provide your schema documentation.
Amplifier AI accepts data in formats such as JSON and CSV and supports both flat and nested schemas. We also accommodate most compression types to ensure a smooth integration process.
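To show what “flat” and “nested” mean in practice, here is a minimal sketch of the same game-round record in both acceptable shapes. Field names are illustrative, not a fixed schema:

```python
import json

# The same illustrative game-round record in two acceptable shapes.
# A CSV feed would use the flat shape; JSON feeds may use either.

flat_record = {
    "player_id": "a1b2c3d4",
    "game_id": "game_8821",
    "round_id": "r-90017",
    "stake": 1.50,
    "payout": 0.00,
    "currency": "GBP",
}

nested_record = {
    "player": {"id": "a1b2c3d4"},
    "game": {"id": "game_8821", "round_id": "r-90017"},
    "transaction": {"stake": 1.50, "payout": 0.00, "currency": "GBP"},
}

print(json.dumps(nested_record, indent=2))
```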
Granular Transactional Level Data – example of typical fields needed:
- Player ID, Game ID, Round ID
- Date, Time
- Site (any hierarchy of Network/Operator/Site/Skin, etc.)
- Balance Type Used (Free spin, Promo Spin, Account balance)
- Jackpot Payout, Stake, Payout, Balance, Currency
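As a sketch of what a granular transactional feed could look like in CSV form, the example below writes one row covering the typical fields above. The column names and values are illustrative only:

```python
import csv
import io

# Illustrative granular casino transaction -- column names are examples
# of the typical fields listed above, not a fixed schema.
columns = [
    "player_id", "game_id", "round_id", "date", "time", "site",
    "balance_type", "jackpot_payout", "stake", "payout", "balance",
    "currency",
]
row = {
    "player_id": "a1b2c3d4", "game_id": "game_8821", "round_id": "r-90017",
    "date": "2024-06-01", "time": "14:32:07", "site": "brand-a/skin-1",
    "balance_type": "account",  # e.g. free_spin, promo_spin, account
    "jackpot_payout": "0.00", "stake": "1.50", "payout": "0.00",
    "balance": "24.10", "currency": "GBP",
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns)
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```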
Game metadata
Amplifier AI utilises data about the available games, such as: game ID, game name, game type (Slots, Instant Win, Table Games, etc.), provider, launch date, RTP, volatility, theme, win lines, min bet, max bet, default bet, jackpot type, layout.
This data is needed to support Amplifier AI recommendation models in accurately categorising and analysing game similarities, game performance, and betting patterns.
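A game metadata record covering the fields above might look like this sketch; names and values are examples only:

```python
import json

# Illustrative game metadata record -- field names and values are
# examples, not a required schema.
game_metadata = {
    "game_id": "game_8821",
    "game_name": "Example Slot",
    "game_type": "Slots",        # e.g. Slots, Instant Win, Table Games
    "provider": "ExampleStudio",
    "launch_date": "2023-11-01",
    "rtp": 96.2,                 # return to player, percent
    "volatility": "high",
    "theme": "adventure",
    "win_lines": 20,
    "min_bet": 0.10,
    "max_bet": 100.00,
    "default_bet": 1.00,
    "jackpot_type": None,        # None if the game has no jackpot
    "layout": "5x3",
}

print(json.dumps(game_metadata, indent=2))
```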
Sports integration
Amplifier AI’s flexible data structure ensures seamless integration, requiring player betting data as well as data on available bets, typically sourced from your back-end platform.
Our team will map your data to Amplifier AI’s internal data structure, so it’s helpful to provide your schema documentation.
Amplifier AI accepts data in formats such as JSON and CSV and supports both flat and nested schemas.
We also accommodate most compression types to ensure a smooth integration process.
Granular Transactional Level Data – example of typical fields needed:
- Player ID, Bet ID
- Bet placement date time, Bet settlement date time
- Bet type reference
- Bet amount, Currency
- Sport ID, Sport name
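As a sketch, one bet in the granular transactional feed could look like the record below. Field names and values are illustrative only:

```python
import json

# Illustrative sports bet record -- field names are examples of the
# typical fields listed above, not a fixed schema.
bet_record = {
    "player_id": "a1b2c3d4",
    "bet_id": "b-55102",
    "bet_placement_datetime": "2024-06-01T14:32:07Z",
    "bet_settlement_datetime": "2024-06-01T16:50:12Z",
    "bet_type_reference": "single",
    "bet_amount": 5.00,
    "currency": "GBP",
    "sport_id": 1,
    "sport_name": "Football",
}

print(json.dumps(bet_record, indent=2))
```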
Bet metadata
Amplifier AI utilises data about upcoming available sports bets.
This data is needed to support Amplifier AI recommendation models in accurately categorising and analysing bet similarities and markets.
Step 2: Data Ingestion
Overview
Future Anthem provides both standard and custom integration options for real-time or batch data transfers.
Data can be shared by the client or "pulled" by Anthem directly from backend systems.
Real-time data is preferred, as it enables Amplifier AI to deliver instant player interventions, such as personalised bonuses or churn prevention within seconds of a bet or spin. In contrast, batch data, which is processed periodically, cannot provide the same immediacy or relevance, leading to missed opportunities for engagement and retention.
Our secure, scalable data platform is hosted on Microsoft Azure. Initial data uploads can be made directly to our write-only, private landing zone within Azure. These uploads can be completed via API, Azure Storage Explorer, the AZCOPY command-line tool, or SFTP.
Review the integration options below and assess which best fits your organisation.
Standard Integrations
Amplifier AI’s standard Real-time Data Integration uses Azure Event Hubs, which enables clients to share their data with a widely supported technology.
Real-time Data
Typical integration options for sending events:
- REST API
- AMQP 1.0
- SDK - available for all major programming languages, such as Python, .NET, Java, and more.
- Kafka Producer API
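Sending events to Azure Event Hubs over its REST API requires a Shared Access Signature in the Authorization header. The sketch below generates one using only the standard library; the namespace, hub, policy name, and key are placeholders standing in for the connection details agreed during integration:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

# Sketch of generating a Shared Access Signature (SAS) token for Azure
# Event Hubs, as used when POSTing events to its REST endpoint.
def make_sas_token(resource_uri: str, key_name: str, key: str,
                   ttl_seconds: int = 3600) -> str:
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    to_sign = f"{encoded_uri}\n{expiry}".encode()
    signature = base64.b64encode(
        hmac.new(key.encode(), to_sign, hashlib.sha256).digest()
    )
    return (
        "SharedAccessSignature "
        f"sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}"
        f"&skn={key_name}"
    )

# Placeholder namespace, hub, and key -- substitute your own values.
token = make_sas_token(
    "https://example-namespace.servicebus.windows.net/example-hub",
    "SendPolicy",
    "placeholder-key",
)
# The token goes in the Authorization header of a POST to
# https://<namespace>.servicebus.windows.net/<hub>/messages
print(token[:40])
```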
Batch Data
For batch transfers, files can be uploaded to Future Anthem’s Azure landing zone via API, Azure Storage Explorer, the AZCOPY command-line tool, or SFTP, as described above.
Bespoke Integrations
If none of the above options is suitable for your organisation, Anthem can deliver a bespoke integration based on your requirements.
Anthem has built a range of successful client integrations; some examples are below.
Real-time Data
Future Anthem has successfully integrated clients via a direct connection to:
- Client Kafka brokers to consume real-time feeds from Kafka
- Client RabbitMQ brokers
- Various databases and DWH implementations to consume real-time feeds via change data capture. Examples include Postgres and Snowflake
Anthem’s bespoke connectors support:
- Static IPs for IP whitelisting
- SSL with self-signed certificates
- Authentication via username and password or via certificates
Batch Data
Future Anthem can ingest batch files directly from client file servers or blob storage, and can also host a file server or blob storage on a client’s behalf.
Future Anthem has successfully integrated clients via:
- SFTP
- Blob storage in Azure and AWS S3
- Ingestion on time schedules
- Ingestion triggered by notifications
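The batch patterns above can be sketched as a simple drop-directory scan. The directory layout and column names are illustrative; a real integration would also track already-processed files and be driven by a time schedule or a storage notification:

```python
import csv
import tempfile
from pathlib import Path

# Minimal sketch of batch ingestion: scan a drop directory for CSV
# batch files and parse every row into a list of dicts.
def ingest_batch_files(drop_dir: Path) -> list[dict]:
    rows: list[dict] = []
    for path in sorted(drop_dir.glob("*.csv")):
        with path.open(newline="") as f:
            rows.extend(csv.DictReader(f))
    return rows

# Demo against a temporary directory standing in for an SFTP server
# or blob container.
with tempfile.TemporaryDirectory() as d:
    sample = Path(d) / "transactions_2024-06-01.csv"
    sample.write_text("player_id,stake\na1b2c3d4,1.50\n")
    ingested = ingest_batch_files(Path(d))
    print(ingested)
```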
Step 3: Data Delivery
Standard Integrations
Amplifier AI’s standard integration provides clients with the ability to download files via API.
Bespoke Integrations
Future Anthem can also deliver files directly to client systems using a wide range of bespoke integration options.
Future Anthem supports:
- SFTP
- Cloud storage across all major cloud providers, such as Azure Blob Storage and AWS S3
- Uploading data via client hosted APIs
As part of your onboarding, Future Anthem will conduct an Integration Planning Workshop; this will help evaluate your current data framework and tailor an integration plan suited to your needs.
Support
This guide answers common FAQs to ensure a smooth and efficient onboarding process. These FAQs cover various aspects of the integration, from timelines and support to data security and system compatibility.
Throughout the onboarding and implementation process, each customer will be assigned a dedicated Customer Success representative who will ensure maximum value is derived from our products.