Arganteal accepts applications from direct candidates only.
We do not work with third-party recruiters or staffing agencies.
Required Country Location: Costa Rica, Peru, Argentina, Brazil, Colombia, South Africa, Mexico, or Panama.
This is full-time work at 40 hours per week.
Overview
Our client seeks a motivated Senior Developer, Data & AI to join their team in developing a groundbreaking, modular platform built from the ground up. Our client digitizes and contextualizes multi-modal sensor data from both digital and physical environments into specialized time-series, graph, and vector databases, powering real-time analytics, compliance, and AI-driven context mapping. This role is ideal for a creative problem-solver who thrives at the intersection of data engineering, distributed systems, and applied AI.
Key Responsibilities
Platform Design & Development
Architect, develop, and deploy core modules (Data, Access, and Agents) for end-to-end data ingestion, contextualization, and visualization.
Design and code sensor collection agents across heterogeneous systems (Windows, Linux, macOS, mobile, IoT).
Implement real-time ingestion pipelines using technologies like Apache Kafka, Apache NiFi, Redis Streams, or AWS Kinesis.
Persist and query multi-modal data across time-series (MongoDB, InfluxDB, TimescaleDB), graph (Neo4j), and vector (Qdrant, FAISS, Pinecone, or Weaviate) databases.
API & Data Access Layer
Build secure, scalable RESTful and GraphQL APIs for exposing platform data models, sensor configuration, and reporting.
Implement a unified Database Access Layer (DBAL) to abstract query logic across multiple databases.
Experiment with or extend the Model Context Protocol (MCP) or a similar standardized data-interchange protocol for multi-DB, multi-agent interoperability.
System Integration & Data Streaming
Develop low-latency data pipelines for transporting and transforming event streams (syslog, telemetry, keystrokes, IoT feeds, cloud service logs).
Collaborate with frontend engineers to connect Access (the visual mapping UI) with back-end pipelines.
Optimization & Scalability
Optimize database query performance using down-sampling, partitioning, and caching techniques.
Design solutions for horizontal scaling and containerized deployment (Docker, Kubernetes, OpenShift).
Apply a "MacGyver mindset" for rapid prototyping and iterative refinement under real-world constraints.
Collaboration & Mentoring
Work directly with compliance officers, security analysts, and business process owners to refine data models for regulatory and operational needs.
Conduct code reviews, mentor junior developers, and promote best practices across the team.