Technical Presentation & Slide Deck Writer (Observability Field)
Hourly rate: 10 - 35 USD/hr · Posted 3 days ago
Client rank: Medium · $517 total spent, 3 hires, 1 job posted, 100% hire rate, open job, 5.00 across 2 reviews
We are looking for someone to help us create technical presentations, slides, and reports in the observability and DevOps field. The ideal candidate can understand technical topics such as monitoring, logging, metrics, and tracing, and turn that information into clear, professional PowerPoint slides and business reports.
What You’ll Do:
- Make clean, easy-to-understand PowerPoint presentations from technical content
- Write reports and documents for business use, based on technical data
- Help explain complex ideas in a simple, structured way
- Work with our team to gather the right information

What We’re Looking For:
- Basic knowledge of observability tools (e.g., Prometheus, Grafana, Datadog)
- Experience creating PowerPoint or Google Slides decks
- Ability to write clear business documents (such as assessments and reports)
- Comfort working with tech teams (DevOps/SRE)
- Good communication skills
Skills: Microsoft PowerPoint, Presentation Design, Presentations, Google Slides

---
Personal Ethical Profile Creation Platform
Fixed budget: 250 - 750 USD · Posted 3 days ago
Client rank: Risky · $107,795 total spent, 89 hires, 1 open job, 4.95 across 12 reviews, registered 03/07/2006
The platform should allow users to create a personal ethical profile through a survey system. Here are the key technical requirements:
**1. Frontend (Survey and Profile Creation):**
- Clear, intuitive user interface for registration and authentication, including OAuth 2.0.
- Dynamic system for displaying questions with single- or multiple-choice responses and optional image attachments.
- Users should be able to save their profiles, review previously answered questions, modify responses at a later date, and continuously refine their ethical profiles.
- Advanced search and filtering by answered/unanswered questions, category, or date.
- Option for users to opt in or out of daily email notifications containing new questions.

**2. Backend (Question Creation and Management):**
- Secure CMS for administrative management of questions, categories, and response options.
- Advanced management of registered users, ethical profiles, response monitoring, and user activity.
- Administrative dashboard for statistical and analytical visualization of collected data.
- Automatic scheduling for sending daily emails with new questions to users who have activated this feature.
- Automated system that sends users standardized email invitations directed toward their AI service providers, encouraging use of the platform APIs to keep behavior ethically aligned with the user's personal profile.

**3. Infrastructure and Scalability:**
- Highly scalable cloud infrastructure (AWS, GCP, Azure) with a relational database (e.g., PostgreSQL) and a caching layer (Redis or Memcached).
- Architecture based on containerized microservices (Docker, Kubernetes).
- Load balancing for optimal request handling and automatic auto-scaling for traffic peaks.
- CDN for optimal delivery of static content and media.
- Continuous monitoring (e.g., Prometheus, Grafana) to ensure stability, availability, and performance.
- Periodic automatic backups and rapid recovery strategies in case of issues.
This infrastructure will ensure high performance and reliability, even with sudden growth in user numbers and interactions.

Skills: PHP, JavaScript, Software Architecture, MySQL, HTML

---
OpenTelemetry Expert Needed for Cloudflare Workers Issue
Hourly rate: 35 - 75 USD/hr · Posted 3 days ago
Client rank: Medium · $602 total spent, 4 hires, 6 jobs posted, 67% hire rate, open job, 4.85 across 3 reviews
OpenTelemetry Expert with Cloudflare Workers and Effect.ts Needed
I'm facing a challenging issue involving OpenTelemetry tracing from Cloudflare Workers and need help from an expert. I've created a minimal, clear reproduction here: https://github.com/Fawwaz-2009/effect-otel-cf-worker-debug/tree/main

The Issue
We have two identical OpenTelemetry setups using Effect.ts (`@effect/opentelemetry`):
- **Node.js** implementation (working perfectly): traces appear correctly in Grafana Tempo.
- **Cloudflare Worker** implementation (the issue): traces successfully reach the OTLP collector but **do not** appear in Grafana Tempo.

Ideal Candidate
The perfect candidate will:
- Have proven expertise with OpenTelemetry and Cloudflare Workers.
- Be familiar with Effect.ts, specifically the recently updated `@effect/opentelemetry` package that manages its own exporting.
- Demonstrate clear understanding by **solving this reproduction issue**—once you show that traces from the Cloudflare Worker reach Grafana, you'll be immediately hired for ongoing consultation.

Bonus Opportunity
You'll be prioritized and hired immediately if you can show that you managed to:
- Set up auto-instrumentation for a Cloudflare Worker interacting with Durable Objects using [this package](https://github.com/evanderkoogh/otel-cf-workers).
- Showcase advanced tracing setups, such as parent/child spans using Effect's parent tracer functionality.

Job Expectations
- Immediate task: fix the provided GitHub reproduction so telemetry from Cloudflare Workers appears correctly in Grafana.
- Ongoing consultation on OpenTelemetry, Cloudflare, and Effect.ts for long-term collaboration.

Application Instructions
When applying, **explicitly demonstrate your solution to the provided GitHub issue**. Generic or unrelated applications will be ignored. The rate is negotiable based on demonstrated expertise. Looking forward to your expert insights and collaboration!
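One plausible cause of this exact symptom (collector accepts the spans, Tempo never shows them) is a timestamp-unit mix-up: OTLP span fields such as `startTimeUnixNano` expect Unix nanoseconds, and a millisecond value (e.g. from `Date.now()` in a Worker) places the span in January 1970, outside any Tempo query window. This is not a confirmed diagnosis for this repro, only a common failure mode worth ruling out; a unit sanity check, sketched in Python:

```python
import time

NS_PER_MS = 1_000_000

def to_unix_nano(epoch_ms: int) -> int:
    """Convert a millisecond timestamp (what Date.now() yields in a Worker)
    to the Unix-nanosecond value OTLP span time fields expect."""
    return epoch_ms * NS_PER_MS

def looks_like_nanos(ts: int) -> bool:
    # Heuristic: Unix-nano timestamps for recent dates are around 1.7e18,
    # while millisecond timestamps are around 1.7e12.
    return ts > 10**17

ms = int(time.time() * 1000)
assert not looks_like_nanos(ms)            # raw millisecond value: wrong unit
assert looks_like_nanos(to_unix_nano(ms))  # converted value is plausible
```

If timestamps check out, comparing the exact OTLP payloads emitted by the working Node.js setup and the Worker setup is the next diagnostic step.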
Skills: Cloudflare, Web Development, JavaScript, TypeScript

---
MLOps Consultation
Fixed budget: ~436 - 872 USD · Posted 3 days ago
Client rank: Risky · 3 open jobs, registered 22/03/2022
Hello,
I need the resource below, working on a monthly basis. ONLY BID IF BUDGET AND JD MATCH. Experience: 4 to 8 years.

Job Responsibilities
- Create and maintain a scalable infrastructure to deliver AI/ML processes, responding to user requests in near real time.
- Design and implement pipelines for training and deployment of ML models.
- Design dashboards to monitor a system; collect metrics and create alerts based on them.
- Design and execute performance tests.
- Perform feasibility studies/analyses with a critical point of view.
- Support and maintain (troubleshoot issues with data and applications).
- Develop technical documentation for applications, including diagrams and manuals.
- Work on many different software challenges, always ensuring a combination of simplicity and maintainability within the code.
- Contribute to architectural designs of large complexity and size, potentially involving several distinct software components.
- Mentor other engineers, fostering good engineering practices across the department.
- Work closely with data scientists and a variety of end users (across diverse cultures) to ensure technical compatibility and user satisfaction.
- Work as a member of a team, encouraging team building and motivation and cultivating effective team relations.

Role Requirements (E = essential, P = preferred)
- P: Bachelor's degree in Computer Science or a related field
- P: Master's degree in data engineering or related
- E: Demonstrated experience and knowledge of Linux and Docker containers
- E: Demonstrated experience and knowledge of some of the main cloud providers (Azure, GCP, or AWS)
- P: Demonstrated experience and knowledge of distributed systems
- E: Proficiency in Python
- E: Experience with MLOps technologies such as Azure ML
- E: Self-driven, with good communication skills
- P: Experience with AI/ML frameworks: Torch, ONNX, TensorFlow
- E: Experience designing and implementing CI/CD pipelines for automation
- P: Experience designing monitoring dashboards (Grafana or similar)
- P: Experience with container orchestrators (Kubernetes, Docker Swarm)
- E: Experience using collaborative development tools such as Git, Confluence, Jira, etc.
- E: Problem-solving capabilities
- E: Strong ability to analyze and synthesize (good analytical and logical thinking)
- E: Proactive, resolution-oriented attitude; used to working in a team and managing deadlines
- E: Ability to learn quickly
- E: Agile development methodologies (Scrum/Kanban)
- E: Minimum of 6 years of work experience, with evidence
- E: Fluid communication in English, both written and spoken

Skills: SQL, UML Design, Oracle, Database Administration, Database Development
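The "collect metrics and create alerts based on them" responsibility can be sketched as a small threshold evaluator. In practice this is usually expressed as Prometheus alerting rules rather than hand-rolled code; the metric names and thresholds below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric: str
    threshold: float
    comparison: str = "gt"   # fire when value > threshold ("gt") or < ("lt")

def evaluate(rules: list[AlertRule], samples: dict[str, float]) -> list[str]:
    """Return an alert message for every rule whose metric breaches its threshold."""
    fired = []
    for rule in rules:
        value = samples.get(rule.metric)
        if value is None:
            continue  # metric not present in this scrape
        if rule.comparison == "gt":
            breached = value > rule.threshold
        else:
            breached = value < rule.threshold
        if breached:
            fired.append(f"ALERT {rule.metric}={value} (threshold {rule.threshold})")
    return fired

rules = [
    AlertRule("inference_p95_latency_ms", 250.0),
    AlertRule("gpu_free_memory_mb", 1024.0, comparison="lt"),
]
samples = {"inference_p95_latency_ms": 410.2, "gpu_free_memory_mb": 8192.0}
print(evaluate(rules, samples))  # only the latency rule fires
```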
Fixed budget: 37,500 - 75,000 INR

---
Senior DevOps Engineer for Kubernetes Consultancy
Hourly rate: 50 - 100 USD/hr · Posted 3 days ago
Client rank: Risky
Senior DevOps Engineer - Remote Kubernetes Infrastructure
Who We Are
------------------
A lean, fully remote team of infrastructure engineers specializing in baremetal Kubernetes. We help companies migrate performance-critical workloads from the cloud to optimized on-prem clusters. No corporate fluff - just engineers solving hard problems for engineers.

What You'll Do
------------------
- Lead migrations of production workloads from AWS/GCP/Azure to our baremetal Kubernetes clusters
- Design and harden:
  * CNI networking (Cilium, Calico) on physical hardware
  * Persistent storage stacks (OpenEBS/Mayastor) without cloud dependencies
  * Services and operators chosen and deployed for the client's engineers
- Implement infrastructure-as-code (Ansible, Terraform) and GitOps workflows
- Partner directly with client engineering teams to troubleshoot and optimize deployments

Need-to-Have (Senior-Level)
----------------------------
[*] Comfortable leading client calls to diagnose issues and propose solutions
[*] 5+ years building/maintaining production Kubernetes clusters (admin certifications preferred)
[*] A long history of general tech & geekery
[*] Hands-on experience migrating stateful workloads
[*] Fluency in:
  - CNIs such as Cilium
  - Observability stacks (Prometheus/Grafana, OpenTelemetry)
  - Layer 2 & 3 networking
[*] Self-directed - you thrive in remote teams with async communication

Why Join?
------------------
- Work on infrastructure where performance metrics *actually matter* (no abstracted cloud limits)
- Direct client partnerships - no layers between you and the engineering teams you support
- Spin out open source projects where we can
- Flexible hours - we prioritize clear docs over meetings
Skills: Linux, Kubernetes, DevOps, Docker, CI/CD Platform, Ansible, Linux System Administration

---
Experienced DevOps Consultant - AWS Required
Budget: not specified · Posted 3 days ago
Client rank: Excellent · $1,254,756 total spent, 99 hires, 188 jobs posted, 53% hire rate, open job, 4.79 across 51 reviews
Objective:
Seeking a seasoned AWS DevOps Engineer to design and manage scalable, secure cloud infrastructure with a focus on Kubernetes, CI/CD automation, and containerization. The role involves building and maintaining production-grade environments using modern DevOps tools and practices.

Requirements:
- 5+ years of hands-on experience with AWS services and DevOps
- Strong experience with Kubernetes in production
- Proficiency with Docker, Terraform, Jenkins, GitLab, CircleCI
- Solid grasp of CI/CD, Infrastructure as Code, and automation
- Experience with monitoring/logging tools (Prometheus, Grafana, ELK)
- Strong knowledge of networking, DNS, VPNs, load balancing
- Experience with secure cloud architecture
Skills: Kubernetes, Amazon Web Services, DevOps, CI/CD Platform, Deployment Automation

---
Looking for a DevOps Engineer to Join Our Team (experience in TradFi)
Budget: not specified · Posted 3 days ago
Client rank: Medium · $228 total spent, 3 hires, 7 jobs posted, 43% hire rate, open job
Job Description:
We are looking for an experienced and highly motivated DevOps Engineer to join our team. In this role, you will collaborate with a team of engineers to design, implement, and optimize cloud infrastructure. You will be responsible for automating and maintaining the critical infrastructure that powers our trading and investment platforms, ensuring scalability, security, and performance in a highly dynamic and regulated environment.

Key Responsibilities:
● Architect, develop, and maintain scalable, secure, high-performance cloud infrastructure on AWS.
● Work closely with software engineering teams to optimize deployment processes, CI/CD pipelines, and software delivery strategies.
● Automate system provisioning, configuration management, and application deployment for reliability and scalability.
● Monitor, troubleshoot, and improve system performance, proactively addressing potential bottlenecks and security vulnerabilities.
● Implement advanced monitoring and logging solutions (Prometheus, Grafana, ELK stack) to ensure observability and rapid incident response.
● Enhance infrastructure automation and configuration management with tools like Ansible, Chef, or Puppet.
● Optimize AWS resource utilization for performance and cost-efficiency while ensuring high availability and redundancy.
● Work within a regulated financial environment, ensuring compliance with security policies, governance, and industry standards.

Required Qualifications:
● Bachelor’s degree in a STEM field (Computer Science, Engineering, Mathematics, or a related discipline).
● 5+ years of experience in DevOps, cloud infrastructure, or site reliability engineering.
● Extensive experience managing AWS environments with best practices in security and scalability.
● Deep understanding of CI/CD tools such as Jenkins, GitLab CI/CD, and AWS CodePipeline.
● Strong expertise in maintaining and automating infrastructure and deployment processes.
● Advanced scripting and automation skills in Python, Bash, or similar languages.
● Hands-on experience with containerization and orchestration technologies, including Docker and Kubernetes.
● Expertise in designing and implementing infrastructure-as-code solutions using Terraform, Pulumi, CloudFormation, or similar tools.
● Hands-on experience with SST.dev for cloud application development and deployment.
● Strong familiarity with monitoring, logging, and alerting frameworks for proactive system management.
● Experience automating security compliance, identity management, and network security best practices.

Bonus Points:
● Previous experience in hedge funds, financial services, or trading technology.
● Expertise in compliance, regulatory frameworks, and security practices in financial technology.
● Experience with AI/ML development tools, including Cursor, GitHub Copilot, or similar, as well as other cloud-based machine learning infrastructure.

Benefits:
● Competitive compensation package, including performance-based bonuses.
● Exposure to cutting-edge financial technology and innovative trading systems.
● Opportunities for professional growth, leadership, and continued learning in a high-impact role.
● Flat organizational structure with direct impact on business outcomes.
● Flexible remote work options for exceptional candidates.
Skills: Git, Finance & Accounting, Amazon Web Services, Terraform, DevOps, Docker, CI/CD

---
ERPNext Architecture Lead
Hourly rate: 11 - 12 USD/hr · Posted 3 days ago
Client rank: Medium · 4 jobs posted, open job
We are seeking a highly experienced ERPNext Architecture Lead to design, implement, and manage scalable ERPNext solutions across the organization. The ideal candidate will possess deep expertise in the Frappe Framework, ERP process design, server infrastructure, and integration strategies. This role involves leading technical architecture, managing system performance and hosting environments, mentoring junior developers, and acting as the primary ERP technical advisor.
Key Responsibilities:

🧱 Architecture & Design
- Define and maintain the technical architecture of ERPNext deployments (monolith or microservice extensions)
- Design multi-company, multi-site ERPNext environments for performance and modularity
- Lead scalability, security, and high-availability initiatives for ERPNext hosting
- Architect data models, workflows, and custom DocTypes to align with business processes

🔧 Implementation, Server & Development Oversight
- Oversee server infrastructure, including ERPNext installations, updates, MariaDB/Redis configurations, backups, and log rotation
- Guide development teams in customizing ERPNext using Frappe (Python, JS, Jinja2)
- Set and enforce best practices for code structure, Git usage, testing, and deployment
- Implement and manage DevOps pipelines (Docker, CI/CD), server monitoring, and access control

📡 Integration & API Strategy
- Design and implement REST API, webhook, or middleware-based integration layers with third-party platforms (e.g., CRM, eCommerce, IoT, POS)
- Utilize Redis, Celery, or message brokers for asynchronous job handling where required

👥 Team Leadership & Collaboration
- Collaborate with business analysts, functional consultants, and department heads
- Translate business needs into technical specs, prototypes, and solution designs
- Provide technical leadership and training to junior ERP developers
- Review junior code, guide development tasks, and mentor on best practices

📊 Monitoring & Optimization
- Implement logging, monitoring (Netdata/Grafana), and performance dashboards
- Conduct audits on system load, query efficiency, and long-term scalability

Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, Engineering, or a related field
- 5+ years of experience in ERP/CRM architecture roles (preferably open-source ERP)
- 3+ years hands-on with ERPNext and the Frappe Framework
- Strong technical knowledge of Python, JavaScript (Vue preferred), MariaDB, Redis, and REST APIs
- Ubuntu/Debian server environments and ERPNext production deployment
- Experience managing ERPNext server environments, including database tuning and high-availability setups
- Familiarity with Git, Docker, and CI/CD tools such as GitLab CI or Jenkins

Preferred Qualifications:
- ERPNext Developer Certification (Frappe Technologies)
- Experience integrating IoT, POS, or eCommerce systems with ERPNext
- Exposure to machine learning/AI integrations for reporting or automation
- Familiarity with industry verticals such as manufacturing, retail, or distribution

Soft Skills:
- Leadership and mentorship capabilities
- Strong troubleshooting and systems thinking
- Clear and structured communication
- Passion for open-source ERP and team empowerment
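For the REST integration layer mentioned above, Frappe exposes documents at `/api/resource/<DocType>` with token-based auth (`Authorization: token api_key:api_secret`). A hedged stdlib-only sketch that builds, but does not send, such a request; the host, key, and document values are placeholders:

```python
import json
import urllib.request

def erpnext_create_request(base_url: str, doctype: str, doc: dict,
                           api_key: str, api_secret: str) -> urllib.request.Request:
    """Build a POST against ERPNext's standard REST resource endpoint."""
    return urllib.request.Request(
        url=f"{base_url}/api/resource/{doctype}",
        data=json.dumps(doc).encode(),
        method="POST",
        headers={
            # Frappe's token-based auth header format.
            "Authorization": f"token {api_key}:{api_secret}",
            "Content-Type": "application/json",
        },
    )

req = erpnext_create_request(
    "https://erp.example.com", "ToDo",
    {"description": "Review junior PR"}, "KEY", "SECRET")
# urllib.request.urlopen(req) would submit it against a live site.
```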
Skills: ERPNext, Python, API, Software Architecture & Design, DevOps

---
Senior Software Engineer (Django, DevOps & Cloud)
Fixed budget: 300 USD · Posted 3 days ago
Client rank: Good · $3,000 total spent, 1 hire, 2 jobs posted, 50% hire rate, open job, 5.00 across 1 review
We are looking for a highly skilled Senior Software Engineer with deep expertise in Django, DevOps, and cloud infrastructure to help architect and optimize our backend systems, CI/CD pipelines, and cloud deployments for a dating app project.
Key Responsibilities:
✅ Django Architecture & Development – Design, optimize, and scale Django-based backend systems.
✅ DevOps & CI/CD Pipelines – Build and maintain robust CI/CD pipelines for Django and Flutter applications (GitHub Actions, GitLab CI, or similar).
✅ Cloud Infrastructure (AWS/GCP/Azure) – Manage and automate cloud deployments (ECS/EKS, Kubernetes, Terraform, etc.).
✅ Monitoring & Performance – Implement logging, monitoring (Prometheus/Grafana), and security best practices.
✅ Collaboration – Work closely with frontend (Flutter) and backend developers to ensure seamless integration.

Required Skills:
✔ Strong Python & Django (5+ years) – REST APIs, ORM, performance tuning.
✔ DevOps & CI/CD – Docker, Kubernetes, GitHub Actions, Jenkins, or similar.
✔ Cloud Platforms – AWS preferred (EC2, ECS, RDS, S3, Lambda, etc.).
✔ Flutter (Bonus) – Experience with Flutter app deployments is a plus.
✔ Problem-Solving – Ability to debug complex issues and optimize performance.

If you’re a senior engineer with strong Django, DevOps, and cloud skills, please submit your proposal with:
- Relevant experience (especially Django + cloud projects).
- Examples of past CI/CD pipelines or infrastructure work.
Skills: Git, Docker, CI/CD, DevOps

---
Advanced Solana Memecoin Sniping Bot Developer Needed
Budget: not specified · Posted 3 days ago
Client rank: Medium · 1 job posted, open job
We are seeking a highly skilled, experienced developer to design and build an advanced, rules-based Solana memecoin sniping bot. This bot will monitor real-time on-chain data to detect bundled wallet buy patterns—a common precursor to rug pulls—and execute rapid trades. Upon identifying suspicious bundled wallet activity, the bot must validate tokens via RugCheck.xyz and, if criteria are satisfied, enter positions within 5 seconds of launch. Critically, the system must continuously monitor for sell signals from those same bundled wallets and execute pre-signed, prioritized exit orders to safeguard profits before a rug pull occurs.
Responsibilities:

Architecture & System Design
- Design a high-performance, low-latency trading system on the Solana blockchain with real-time WebSocket data feeds.
- Architect an efficient data pipeline using in-memory message brokers (e.g., Redis Streams or Kafka) and optimized processing pipelines.

Bundled Wallet Detection & Validation
- Develop and implement robust rules-based logic to detect bundled wallet buys within a short (1–3 second) time window.
- Integrate with RugCheck.xyz to validate token safety based on criteria such as liquidity locks, renounced mint/freeze authorities, and contract audit status.

Entry & Exit Mechanism Development
- Build a pre-signed transaction system that allows entry orders to be broadcast within 5 seconds of launch.
- Implement real-time monitoring modules that track flagged wallet activity to immediately trigger exit orders when sell signals are detected.
- Optimize transaction execution using techniques such as fee bumping, atomic transaction bundling, and direct validator communication to minimize latency.

Optimization and Testing
- Write critical modules in low-level languages (e.g., Rust or C/C++) for performance-critical paths.
- Create rigorous backtesting and adversarial simulation frameworks using historical data and stress-test scenarios (e.g., flash sell events).
- Develop dashboards and logging systems (using Grafana or similar) for real-time performance tracking and risk management.

Documentation and Collaboration
- Provide detailed documentation for system architecture, algorithms, and deployment instructions.
- Collaborate with the project lead and other team members to iteratively improve the system based on live testing feedback.

Qualifications:

Experience
- 5+ years of professional software development experience, with a strong background in low-latency systems.
- Proven track record in blockchain development, especially on the Solana network.
- Experience building high-frequency or algorithmic trading bots.

Technical Skills
- Proficiency in Rust, C/C++, or similar low-level programming languages.
- Strong experience with JavaScript/TypeScript for blockchain interfacing (e.g., using @solana/web3.js).
- Familiarity with real-time data streaming and message brokers (Redis, Kafka).
- Experience integrating third-party APIs, such as RugCheck.xyz or similar token audit/safety tools.
- Knowledge of blockchain microstructure, order-flow analysis, and transaction prioritization techniques.

Additional Skills
- Ability to design and implement automated risk management systems (dynamic position sizing, adaptive trailing stops).
- Experience with backtesting frameworks and adversarial simulation in trading environments.
- Strong problem-solving abilities and keen attention to detail.

Bonus Points
- Prior experience developing MEV-related strategies or front-running prevention methods on blockchains.
- Contributions to open-source blockchain projects or active participation in crypto trading communities.
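The rules-based "bundled wallet buys within a 1–3 second window" detection reduces to a sliding-window count of distinct buyers. A toy Python sketch of that detection step only; the window size and wallet-count threshold are hypothetical, and a real system would feed this from Solana WebSocket subscriptions:

```python
from collections import deque

class BundleDetector:
    """Flag a token when at least min_wallets distinct wallets buy it
    within a window_s-second window (illustrative thresholds)."""

    def __init__(self, window_s: float = 3.0, min_wallets: int = 4):
        self.window_s = window_s
        self.min_wallets = min_wallets
        self.buys: deque[tuple[float, str]] = deque()  # (timestamp, wallet)

    def on_buy(self, ts: float, wallet: str) -> bool:
        self.buys.append((ts, wallet))
        # Evict buys that fell out of the detection window.
        while self.buys and ts - self.buys[0][0] > self.window_s:
            self.buys.popleft()
        distinct = {w for _, w in self.buys}
        return len(distinct) >= self.min_wallets

det = BundleDetector()
events = [(0.0, "A"), (0.5, "B"), (1.1, "C"), (2.2, "D")]
flags = [det.on_buy(ts, w) for ts, w in events]
print(flags)  # only the 4th buy completes the bundle
```

The same wallet set would then be watched for sell signals to trigger the pre-signed exit path described above.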
Skills: API, PHP, Node.js, Python, C++, Rust, Blockchain, NFT & Cryptocurrency

---
Senior DevOps Engineer for Kubernetes Consultancy (Oceana)
Hourly rate: 50 - 100 USD/hr · Posted 3 days ago
Client rank: Medium · 1 job posted, open job
Senior DevOps Engineer - Remote Kubernetes Infrastructure

The description is identical to the "Senior DevOps Engineer for Kubernetes Consultancy" posting above, with one addition: Find out more and contact us at lithus.eu
Skills: Linux, Kubernetes, DevOps, Docker, CI/CD Platform, Ansible, Linux System Administration

---
Full-Stack Machine Learning Engineer – AI Call Center App (MLOps, DevOps, Frontend & Backend)
Fixed budget: 1,000 USD · Posted 2 days ago
Client rank: Medium · $950 total spent, 2 hires, 1 job posted, 100% hire rate, open job, 5.00 across 2 reviews
We are building a next-generation AI-powered call center platform for clients across the US and Canada, designed to automate workflows, enhance agent performance, and deliver real-time intelligence using ML.
We're seeking a Full-Stack ML Engineer who can take ownership of the end-to-end development and deployment of our application, including the machine learning system, backend infrastructure, frontend interface, and data pipeline integration. You'll work on everything from deploying ML models for voice analytics to building intuitive dashboards for call center agents. This role combines MLOps, DevOps, data engineering, frontend, and backend responsibilities and is ideal for an engineer who wants to own complex systems from start to scale.

Key Responsibilities:

Machine Learning / MLOps
- Build and deploy ML models for voice transcription, sentiment analysis, lead scoring, and real-time recommendations
- Implement MLOps workflows: data versioning (DVC), experiment tracking (MLflow), automated retraining, and performance monitoring
- Use Docker/Kubernetes for scalable model serving via Seldon, BentoML, or FastAPI-based endpoints

Backend Engineering
- Develop RESTful APIs and real-time WebSocket services (Node.js, Python/FastAPI, or similar)
- Integrate with cloud telephony providers (Twilio, Plivo, etc.)
- Ensure security, authentication (OAuth/JWT), and GDPR/PIPEDA/CCPA compliance
- Manage serverless or container-based deployments in AWS/GCP/Azure

Database & Data Engineering
- Design scalable database schemas (PostgreSQL, MongoDB, or DynamoDB) to store leads, agent logs, recordings, and call summaries
- Build batch and streaming pipelines for real-time processing (Kafka, Spark, Airflow)
- Maintain clean data workflows for ML training and CRM analytics

Frontend Development
- Build responsive dashboards and agent interfaces (React.js or Vue.js)
- Integrate live call transcription, AI suggestions, and performance stats
- Implement notification systems and alerting for agent feedback

Requirements:
- 3+ years of experience in full-stack development with ML/AI components
- Strong programming skills in Python, JavaScript (Node.js, React.js), or similar
- Experience deploying ML systems using MLOps pipelines and DevOps automation
- Solid grasp of relational and NoSQL databases, API development, and CI/CD workflows
- Familiarity with cloud infrastructure and services (AWS/GCP/Azure)

Nice to Have:
- Experience with telephony APIs (Twilio, Vonage, Five9, etc.)
- Knowledge of speech-to-text systems and real-time audio pipelines
- Understanding of compliance frameworks for US and Canadian markets (TCPA, PIPEDA, HIPAA, etc.)
- Experience with real-time dashboards and analytics tools (Grafana, Kibana, or Chart.js)
Skills: API, Amazon Web Services, Java, JavaScript, MongoDB

---
vLLM Support Engineer – Handle L2/L3 Tickets
Hourly rate: 10 - 60 USD/hr · Posted 2 days ago
Client rank: Medium · 83 jobs posted, open job
🚀 vLLM Expert Needed – L2/L3 Support for AI Inference Engine
Part-time, flexible, support-based role.

We’re looking for an experienced vLLM engineer to support clients using this blazing-fast inference engine for large language models. If you enjoy solving real-world issues in production environments and working with cutting-edge AI infrastructure — this is your chance!

✅ What You’ll Be Doing:
- Handle L2/L3 support tickets for vLLM
- Troubleshoot and resolve issues with model serving, latency, and performance
- Collaborate with clients and internal teams to deliver clear, effective solutions
- Communicate professionally in English (written and spoken)

💡 What We’re Looking For:
- Deep hands-on experience with vLLM
- Strong debugging and problem-solving skills
- Experience supporting open-source technologies in production
- Clear and professional English communication

🧠 Good to Know (Bonus Skills):
- Model Serving & Generative AI: MLflow, LangChain, LlamaIndex, Hugging Face, OpenAI GPT, Llama 2, ChromaDB, QdrantDB, Pinecone, Milvus
- DevOps & Infrastructure: Docker, Kubernetes, Argo CD, Vault, NGINX, Traefik
- Monitoring & Logging: Prometheus, Grafana, Sentry
- Machine Learning & Notebooks: TensorFlow in Jupyter, PyTorch in Jupyter, JupyterLab

🕒 This is not a full-time job – work is as-needed, based on incoming support requests. Great for freelancers or part-time experts.

🚀 Apply now with a short note about your experience with vLLM or similar tools. Let’s keep LLMs fast, stable, and production-ready!
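For context on the serving side of these tickets: vLLM exposes an OpenAI-compatible HTTP server, so much of L2/L3 triage happens against its `/v1/chat/completions` endpoint. A stdlib-only sketch that builds, but does not send, such a request; the host, model name, and prompt are placeholder assumptions for a locally running server:

```python
import json
import urllib.request

def vllm_chat_request(base_url: str, model: str, prompt: str,
                      max_tokens: int = 128) -> urllib.request.Request:
    """Build a request against vLLM's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = vllm_chat_request("http://localhost:8000",
                        "meta-llama/Llama-2-7b-chat-hf",
                        "Why is my p95 latency spiking?")
# urllib.request.urlopen(req) would call a running vLLM server.
```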
Skills: Troubleshooting, Technical Support, Incident Management

---
We Need a DevOps Specialist in OVH Cloud
Fixed budget: 30 - 250 USD · Posted 2 days ago
Client rank: Excellent · $38,936 total spent, 46 hires (2 active), 4 open jobs, 5.00 across 5 reviews, registered 15/10/2022
||
We need a DevOps cloud architect with years of experience. Verifiable experience with OVH Cloud is required.

Requirements:

Technical skills
- Infrastructure management: knowledge of tools such as Terraform, Ansible, and Kubernetes.
- Automation: command of scripting and languages such as Bash, Python, and PowerShell.
- Version control: experience with Git and platforms such as GitHub or GitLab.
- Monitoring and observability: familiarity with Prometheus, Grafana, and logging tools such as the ELK Stack.
- Security: implementation of security policies and access management (IAM, firewalls, Zero Trust).

Soft skills
- Collaboration: teamwork with developers, operators, and stakeholders.
- Effective communication: explaining technical concepts to non-technical teams.
- Problem solving: detecting and resolving incidents quickly and efficiently.
- Continuous-improvement mindset: constantly seeking process optimization.

Preferably a Christian. We look forward to hearing from you.

Skills: System Admin, Linux, Software Architecture, PostgreSQL, Ubuntu
Fixed budget:
30 - 250 USD
2 days ago
|
|||||
We need a DevOps specialist with OVH Cloud experience
|
30 USD | 2 days ago |
Client Rank
- Medium
$85 total spent
5 hires
59 jobs posted
8% hire rate,
open job
5.00
of 2 reviews
|
||
We are looking for a DevOps cloud architect with years of experience. Proven experience with OVH Cloud is required.
Requirements:
Technical Skills:
- Infrastructure Management: knowledge of tools such as Terraform, Ansible, and Kubernetes.
- Automation: proficiency in scripts and languages such as Bash, Python, and PowerShell.
- Version Control: experience with Git and platforms such as GitHub or GitLab.
- Monitoring and Observability: knowledge of Prometheus, Grafana, and logging tools such as the ELK Stack.
- Security: implementation of security policies and access management (IAM, firewalls, Zero Trust).
Soft Skills:
- Collaboration: teamwork with developers, operators, and stakeholders.
- Effective Communication: explaining technical concepts to non-technical teams.
- Troubleshooting: detecting and resolving incidents quickly and efficiently.
- Continuous-improvement mindset: constantly seeking process optimization.
Preferably a Christian. We look forward to hearing from you.
Skills: DevOps, Deployment Automation, CI/CD
Fixed budget:
30 USD
2 days ago
|
|||||
Thanos Support Engineer - L2/L3 Monitoring & Observability
|
10 - 60 USD
/ hr
|
2 days ago |
Client Rank
- Medium
83 jobs posted
open job
|
||
🚀 Thanos Expert Needed – L2/L3 Support for Scalable Monitoring
Part-time, flexible, support-based role. We're on the hunt for a seasoned Thanos engineer to support our observability stack built on Prometheus and Thanos. If you love diving into distributed systems, making metrics scalable and highly available, and solving real-world infrastructure issues — we want to hear from you!
✅ What You'll Be Doing:
- Handle L2/L3 support tickets related to Thanos
- Troubleshoot and resolve issues with query performance, storage, and availability
- Assist clients with integrating and scaling Thanos with Prometheus
- Communicate effectively with both technical and non-technical stakeholders
💡 What We're Looking For:
- Solid hands-on experience with Thanos (Sidecar, Querier, Store, Compactor, etc.)
- Strong understanding of Prometheus and long-term metrics storage
- Great debugging skills and a passion for clean observability
- Clear and professional communication in English
🧠 Good to Know (Bonus Skills):
- Monitoring, Logging & Observability: Prometheus, Grafana, Alertmanager, Jaeger, Kibana, ELK, Zabbix, Sentry
- DevOps & Infrastructure: Docker, Kubernetes, Helm, Argo CD, HashiCorp Vault, Traefik, NGINX, HAProxy, etcd
- Databases & Storage: PostgreSQL, InfluxDB, MinIO, Redis
- Security & Identity: Keycloak, Vaultwarden, OpenLDAP
- Machine Learning & Data Science: MLflow
🕒 This is not a full-time job: work is as-needed, based on incoming support requests. Perfect for freelancers or part-time observability experts.
🚀 Apply now with a brief note about your experience with Thanos or similar observability stacks. Let's keep monitoring fast, scalable, and reliable!
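For context on the Sidecar integration work mentioned above: a Sidecar is typically launched alongside each Prometheus instance with a handful of flags. The sketch below assembles an illustrative invocation; the paths, URL, and ports are assumptions, though the flag names are standard Thanos Sidecar flags:

```python
def thanos_sidecar_args(tsdb_path: str, prom_url: str, objstore_config: str) -> list:
    """Assemble a Thanos Sidecar CLI invocation (values are illustrative)."""
    return [
        "thanos", "sidecar",
        f"--tsdb.path={tsdb_path}",                   # Prometheus TSDB dir (shared volume)
        f"--prometheus.url={prom_url}",               # local Prometheus to proxy queries to
        f"--objstore.config-file={objstore_config}",  # object storage for block uploads
        "--grpc-address=0.0.0.0:10901",               # StoreAPI endpoint the Querier fans out to
        "--http-address=0.0.0.0:10902",               # metrics/readiness endpoint
    ]

print(" ".join(thanos_sidecar_args("/prometheus", "http://localhost:9090", "bucket.yml")))
```

Many L2 tickets about "missing historical metrics" come down to one of these flags pointing at the wrong volume or bucket, so checking the effective invocation is a sensible first step.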
Skills: Technical Support, Troubleshooting
Hourly rate:
10 - 60 USD
2 days ago
|
|||||
Custom XRPL Sidechain Deployment – Looking for Blockchain Developer (Hooks + Bridge)
|
5,500 USD | 2 days ago |
Client Rank
- Excellent
$66 224 total spent
25 hires
32 jobs posted
78% hire rate,
open job
4.99
of 20 reviews
|
||
**Upwork Brief: XRPL Sidechain Development Request**
**Project Overview:** We are looking for a skilled XRPL blockchain developer or team to assist with the setup and deployment of a custom XRPL sidechain. This project is intended as the core infrastructure for a next-generation Web3 application, but the specific business model and token economy will be shared only after NDA execution. At this stage, we are gathering accurate quotes and technical capability assessments.
**Scope of Work:**
1. **XRPL Sidechain Deployment**
- Deploy a fully functional XRPL sidechain using `rippled` → https://xrpl.org/install-rippled.html
- Configure genesis block and network ID → https://xrpl.org/run-rippled-as-a-validator.html#network-id-and-validator-token
- Use DigitalOcean or similar hosting infrastructure (cost-optimized) → https://www.digitalocean.com/pricing/droplets
- Sidechain should support smart logic via XRPL Hooks → https://hooks.xrpl.org/
2. **Validator & Node Setup**
- Setup of at least 1 validator node (expandable to 5 in future) → https://xrpl.org/run-rippled-as-a-validator.html
- Optional: bridge node and observer/API node configuration
- Basic peer discovery, UNL setup, and ledger synchronization → https://xrpl.org/unl.html
3. **Bridge to XRPL Mainnet**
- Implement XLS-38d standard bridge from XRPL mainnet to the custom sidechain → https://github.com/XRPLF/XRPL-Standards/discussions/67
- Create door accounts for token mint/burn flow between chains → https://xrpl.org/xrp-ledger-sidechains.html#door-accounts
- Test basic bridged asset flow (XRP or a custom issued token)
4. **Hooks Integration (V2)**
- Deploy XRPL Hooks (v2) to enable programmable logic on the sidechain → https://hooks-builder.xrpl.org/
- Example use cases: subscription enforcement, payout splitting, transaction-based triggers
5. **Infrastructure Monitoring**
- Setup of uptime monitoring (Grafana, UptimeRobot, or similar) → https://www.digitalocean.com/community/tutorials/how-to-install-grafana-on-ubuntu-22-04
- Logging and error alerts for validator/node status
6. **Security and Scalability**
- Basic DDoS protection and firewall configurations
- Configurable validator set and multi-region node deployment options
7. **Documentation & Handoff**
- Clear documentation for node management and bridge operations
- Guidance for expanding validator count, re-syncing nodes, and testing Hooks
**Development Timeline:** The goal is to complete core development in **3 months**, followed by **1 month of testing and debugging**. Suggested timeline:
- **Month 1:** Sidechain and validator setup; initial network configuration and genesis deployment; begin bridge node setup and mainnet connectivity
- **Month 2:** Finalize bridge logic and token transfer flow (XLS-38d); deploy and test core XRPL Hooks (stream logic, access triggers, payout split); observer/API node deployment
- **Month 3:** Infrastructure hardening (DDoS protections, firewall, multi-region nodes); add monitoring stack (Grafana or similar); finalize documentation and handoff materials
- **Month 4:** Full system test, debugging, load testing; optimize Hooks and bridge flow; validate that all infrastructure components run reliably
**Requirements:**
- Demonstrated experience with `rippled`, XRPL Hooks, and sidechain deployments
- Familiarity with XLS-38d and XRPL standards
- Ability to configure and manage nodes via DigitalOcean or similar cloud environments
- Good communication and reporting practices
**Optional Bonus Skills:**
- Experience with tokenomics or NFT standards (XLS-20) → https://xrpl.org/non-fungible-tokens.html
- AMM or liquidity pool knowledge (XLS-30d) → https://github.com/XRPLF/XRPL-Standards/discussions/69
- Performance benchmarking and testing under simulated load
**Deliverables:**
- Fully operational XRPL sidechain with 1 validator (minimum)
- Functional mainnet bridge via XLS-38d
- One or more deployed Hooks (example logic)
- Documentation and support for expansion
**Next Steps:** Please respond with:
- Your experience related to XRPL or sidechain development
- Estimated timeline for delivery
- Rough cost quote based on the above scope
- Any questions you need answered to refine your estimate
We look forward to identifying a partner with strong knowledge of XRPL internals, Hooks deployment, and sidechain customization. Full project details to be shared with selected candidates after NDA.
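As a point of reference for the node-management and documentation deliverables: `rippled` exposes a JSON-RPC interface whose request bodies follow a simple method/params shape. The helper below builds such bodies; `server_info` and `ledger` are standard rippled methods, while the port and usage noted in the comments are assumptions:

```python
def rippled_rpc(method: str, **params) -> dict:
    """Build a rippled JSON-RPC request body (POSTed to the node's JSON-RPC
    port, typically 5005 on admin-enabled nodes)."""
    return {"method": method, "params": [params] if params else [{}]}

# Typical health checks during validator setup and ledger-sync verification:
info_req = rippled_rpc("server_info")                         # sync state, peers, uptime
ledger_req = rippled_rpc("ledger", ledger_index="validated")  # latest validated ledger
```

Scripting checks like these against each validator and observer node is one straightforward way to implement the "validate that all infrastructure components run reliably" milestone in Month 4.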
Skills: Laravel, Rust, DevOps, Docker, Node.js, JavaScript, Python, C++, Websockets
Fixed budget:
5,500 USD
2 days ago
|
|||||
We need a DevOps specialist at OVH Cloud
|
10 - 15 USD
/ hr
|
2 days ago |
Client Rank
- Medium
$120 total spent
6 hires
63 jobs posted
10% hire rate,
open job
5.00
of 4 reviews
|
||
We are looking for a DevOps cloud architect with years of experience. Proven experience with OVH Cloud is required.
Requirements:
Technical Skills:
- Infrastructure Management: knowledge of tools such as Terraform, Ansible, and Kubernetes.
- Automation: proficiency in scripts and languages such as Bash, Python, and PowerShell.
- Version Control: experience with Git and platforms such as GitHub or GitLab.
- Monitoring and Observability: knowledge of Prometheus, Grafana, and logging tools such as the ELK Stack.
- Security: implementation of security policies and access management (IAM, firewalls, Zero Trust).
Soft Skills:
- Collaboration: teamwork with developers, operators, and stakeholders.
- Effective Communication: explaining technical concepts to non-technical teams.
- Troubleshooting: detecting and resolving incidents quickly and efficiently.
- Continuous-improvement mindset: constantly seeking process optimization.
Preferably Spanish-speaking. We look forward to hearing from you.
Skills: Ubuntu, Linux System Administration, PostgreSQL
Hourly rate:
10 - 15 USD
2 days ago
|
|||||
OpenStack Administrator
|
13 - 14.06 USD
/ hr
|
2 days ago |
Client Rank
- Medium
64 jobs posted
open job
|
||
OpenStack Administrator
Experience Required: 4+ years
Location: Remote
🔍 Job Summary: We are seeking a skilled OpenStack Administrator with at least 4 years of experience managing OpenStack environments. The ideal candidate should have hands-on experience with the major OpenStack modules, strong Linux administration skills, Python scripting ability, and familiarity with container orchestration using Kubernetes. Experience working with Azure, REST APIs, and virtualization technologies such as KVM is also required.
🛠️ Key Responsibilities:
- Manage and administer OpenStack infrastructure (Nova, Neutron, Keystone, Glance, Cinder).
- Deploy, configure, and troubleshoot OpenStack clusters in production environments.
- Write and maintain Python scripts to automate system tasks and workflows.
- Interact with RESTful APIs using tools like cURL or Python for automation and integration.
- Manage Linux-based systems that support the OpenStack infrastructure.
- Monitor and optimize system performance, reliability, and security.
- Troubleshoot system failures and resolve issues across the OpenStack stack.
- Work with Kubernetes for container orchestration and application deployment.
- Collaborate with DevOps and cloud teams to support hybrid environments (OpenStack and Azure).
- Maintain proper documentation and ensure compliance with best practices.
✅ Required Skills:
- Proficiency in OpenStack modules: Nova, Neutron, Keystone, Glance, Cinder
- Strong Linux administration skills
- Scripting experience in Python
- Experience working with RESTful APIs via tools like cURL and Python
- Good understanding of KVM and virtualization technologies
- Knowledge of Kubernetes for container management
- Exposure to Azure or hybrid cloud environments
💡 Preferred Skills:
- Familiarity with cloud automation tools (Ansible, Terraform)
- Knowledge of monitoring tools (Prometheus, Grafana)
- Experience working in Agile/Scrum environments
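The REST-API responsibility above usually starts with Keystone: every OpenStack API call needs a token, obtained by POSTing a v3 password-auth document to `/v3/auth/tokens`. A sketch of that payload (the user, project, and domain values are placeholders):

```python
def keystone_auth_payload(user: str, password: str, project: str,
                          domain: str = "default") -> dict:
    """Keystone v3 password-auth body; the issued token is returned in the
    X-Subject-Token response header."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {"name": user,
                             "domain": {"id": domain},
                             "password": password},
                },
            },
            # Scope the token to a project so Nova/Neutron/etc. will accept it.
            "scope": {"project": {"name": project, "domain": {"id": domain}}},
        }
    }
```

The same body works whether you send it with cURL or Python's HTTP libraries, which is what the "RESTful APIs via tools like cURL and Python" requirement refers to in practice.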
Skills: OpenStack, Kubernetes, Scripting Language
Hourly rate:
13 - 14.06 USD
2 days ago
|
|||||
Grafana Report Creation from Design
|
~34 - 284 USD | 1 day ago |
Client Rank
- Excellent
$108 094 total spent
79 hires
, 12 active
3 open job
4.99
of 16 reviews
Registered at: 09/09/2021
|
||
I need a skilled professional to build a Grafana report based on a design I will provide. The report needs to connect to a SQL server as its data source, and incorporate specific visualizations as depicted in the attached image.
Key Requirements:
- Connect the Grafana report to a SQL Server data source
- Implement visualizations as per the design picture
- Enable filtering capabilities within the report
Ideal Skills:
- Expertise in Grafana
- Experience with SQL Server
- Ability to interpret and implement visual designs
- Proficiency in creating interactive reports with filtering options
Please review the attached design before bidding.
Skills: SQL, Business Intelligence
Fixed budget:
30 - 250 EUR
1 day ago
|
|||||
DevOps Engineer (Contract) - Ongoing support and deployment
|
6 - 10 USD
/ hr
|
1 day ago |
Client Rank
- Risky
|
||
We’re looking for a DevOps Engineer who thrives on automating workflows, improving reliability, and making engineering teams move faster with confidence. You’ll work across infra, CI/CD, observability, and security to build resilient, scalable systems.
You believe infrastructure should be treated as code, logs should be searchable in seconds, and deployments should be stress-free. If you love building clean pipelines, managing infra-as-code, and working with developers to ship with speed and stability — this role is for you.
Details:
- Role: DevOps Engineer (Contract)
- Location: Remote (India)
- Ideal start date: Immediate/ASAP
- Compensation: To be discussed
What you'll do:
- Set up and maintain CI/CD pipelines across services and environments
- Monitor system health and set up alerts/logs for performance and errors
- Work closely with backend/frontend teams to improve deployment velocity
- Manage cloud environments (staging, production) with cost and reliability in mind
- Ensure secure access, role policies, and audit logging
- Contribute to internal tooling, CLI automation, and dev workflow improvements
Qualifications (Must-Haves):
- 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering
- Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)
- Proficiency in writing scripts (Bash, Python) for automation
- Good understanding of system monitoring, logs, and alerting
- Strong debugging skills, ownership mindset, and clear documentation habits
- Infra monitoring tools such as Grafana dashboards
Nice-to-Have:
- Experience with Kubernetes, serverless architectures, or cost optimization
- Exposure to security best practices (secrets, encryption, IAM roles)
- GitHub CLI, Makefiles, or internal tooling experience
- Startup or fast-paced environment background
Perks:
- Ownership over infrastructure and automation from Day 1
- Opportunity to build high-availability systems in a product-led team
- Work closely with engineering and product to unblock releases and scale infrastructure
Skills: DevOps, Docker, Amazon Web Services, CI/CD, Microservice, Git, Python, NGINX
Hourly rate:
6 - 10 USD
1 day ago
|
|||||
Registrar system + dropcatch domain system + auction platform
|
not specified | 1 day ago |
Client Rank
- Medium
2 jobs posted
open job
|
||
🚀 Detailed Project Brief – Custom AFNIC Registrar Infrastructure + Domain Dropcatch Platform (Similar to WebExpire/KifDom) + Advanced Domain Sniping (RUSH Server)
📌 Introduction
We're looking for an expert-level development team or individual to build a comprehensive AFNIC-accredited domain registrar infrastructure combined with a custom-built, high-performance domain drop-catching platform, inspired by (but not identical to) platforms such as WebExpire.fr and KifDom.com. This project also requires the development of an optimized domain-sniping module that interacts specifically with AFNIC's RUSH EPP server to maximize capture rates for expiring domains.
⚠️ Important: The platform developed must be original and must not violate any intellectual property belonging to WebExpire.fr, KifDom.com, or any third party. These platforms are mentioned solely as references for desired functionality and market positioning.
📌 1. Technical Backend – AFNIC-Accredited Registrar
Develop a backend compliant with AFNIC's accreditation requirements, including:
- Complete implementation of the EPP protocol (RFC 5730)
- Mutual TLS authentication (X.509 certificates)
- GDPR-compliant WHOIS data management
- Data escrow integration as per AFNIC requirements
- Secure infrastructure with monitoring, backups, and logging
📌 2. Frontend Domain Dropcatch Platform
Create a user-friendly, modern, and responsive frontend platform, inspired by the features of platforms like WebExpire.fr and KifDom.com (without directly copying design or branding):
🌐 Key Functionalities Required:
- Listing of expiring or recently expired domains (mainly .fr)
- Domain metrics display (Trust Flow, Citation Flow, backlink count, age, pricing)
- Powerful real-time domain search and advanced filtering
- Domain reservation or pre-order functionality (payment integration not required initially)
- Clear domain availability status indicators (Available, Reserved, Sold)
🎨 Custom Branding & Design:
- Unique visual identity provided by us (logo, color scheme, layout)
- Editable static pages (About, FAQ, Terms, Contact form)
- Optimized for SEO and responsive on all devices
📌 3. Admin Back-office (Custom Management Panel)
Implement a secure, robust admin backend panel allowing our team to easily manage domain inventory and website data:
- Secure authentication (2FA)
- Dashboard with domain statistics and status overview
- Add, edit, and remove domain listings manually
- CSV bulk import/export for domains
- Real-time domain status management (Available, Reserved, Sold)
- Admin activity logging and user tracking
📌 4. Advanced Sniping Module (Optimized for the AFNIC RUSH Server)
Develop an advanced domain-sniping engine optimized specifically for AFNIC's dedicated RUSH EPP server. AFNIC permits EPP requests at up to one request every 0.01 second (100 requests/sec), and the module must use this quota as efficiently as possible.
🎯 Domain Sniping Capabilities:
- Load a prioritized domain list (e.g., CSV format)
- Assign configurable priority levels per domain
- Intelligent distribution of request quotas based on domain priority
- Automatic reallocation of EPP request quotas: if a domain is successfully acquired, instantly redistribute resources to the remaining domains; if a domain is lost (captured by a competitor), immediately refocus resources on the next available domain(s)
- Real-time dashboard indicating performance, active sniping status, and resource allocation
🧠 Intelligent Efficiency:
- Ensure zero wasted requests (stop checking domains once their status is confirmed)
- Adaptive resource management ensuring continuous optimal quota usage
📌 5. Technical Integration & Automation
- Integration between the registrar backend (EPP) and the frontend platform
- Automated synchronization (at minimum every 30–60 seconds) between backend and frontend
- Immediate frontend updates on successful domain captures
📌 6. Preferred Tech Stack (Suggested)
- Backend: Python (FastAPI or Django), Node.js, or Go
- Frontend: React or Vue.js (preferred), Tailwind CSS
- Database: PostgreSQL/MySQL
- Infrastructure: Dockerized setup on VPS/bare metal (Debian/Ubuntu)
- Monitoring & Logging: Prometheus/Grafana/Sentry
📌 7. Deployment & Accreditation
- Development and provision of a staging environment for AFNIC testing
- Assistance through AFNIC accreditation technical tests
- Production deployment and ongoing support post-launch
✅ Summary of Deliverables:
- AFNIC Backend: complete EPP integration, secure infrastructure
- Frontend Platform: unique, modern design inspired by similar market platforms
- Admin Back-office: domain management, bulk imports, admin logs
- Sniping Module: priority-based, optimized AFNIC RUSH sniping module
- Integration: real-time synchronization and data exchange
📝 What You Must Include in Your Proposal:
- Overview of your technical strategy and approach
- Demonstrated experience with AFNIC accreditation, domain sniping, or related high-frequency request integrations
- Proposed technologies, frameworks, and architecture
- Timeline and detailed milestones
- Comprehensive budget breakdown per deliverable
🚀 Budget and Timeline:
- Flexible budget (we value quality, reliability, and expert performance)
- Project ideally completed in 30–45 days
- All documentation is online on the AFNIC website
⚠️ Important Legal Note: This job strictly requires original, custom-built solutions. No direct copying or infringement of WebExpire.fr, KifDom.com, or any other platform is permitted. The platforms mentioned are provided solely as functional and conceptual references to clarify project requirements.
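The quota-reallocation behaviour described in section 4 can be sketched as a priority-weighted split of the 100-requests/second budget that is simply recomputed whenever a domain is won or lost. This is an illustrative algorithm, not AFNIC's or any existing platform's code:

```python
def allocate_quota(domains: dict, total_rps: int = 100) -> dict:
    """Split the per-second EPP request quota across pending domains in
    proportion to their priority weights.

    domains: {domain_name: priority_weight}; domains already acquired or lost
    should be removed before calling, which automatically redistributes their
    share of the quota to the rest (the reallocation rule in the brief)."""
    pending = {d: w for d, w in domains.items() if w > 0}
    if not pending:
        return {}
    total_w = sum(pending.values())
    alloc = {d: (w * total_rps) // total_w for d, w in pending.items()}
    # Hand the integer-division remainder to the highest-priority domains.
    leftover = total_rps - sum(alloc.values())
    for d in sorted(pending, key=pending.get, reverse=True)[:leftover]:
        alloc[d] += 1
    return alloc
```

A real module would call this on every status change, then pace each domain's EPP checks at its allocated rate, so no part of the 0.01-second interval is ever wasted on a resolved domain.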
Skills: Web Development
Budget:
not specified
1 day ago
|
|||||
DevOps Engineer to join our Team (with experience in TradeFi) (Remote or Relocation to Dubai)
|
not specified | 1 day ago |
Client Rank
- Medium
$228 total spent
3 hires
16 jobs posted
19% hire rate,
open job
|
||
We are looking for an experienced and highly motivated DevOps Engineer to join our team. In this role, you will collaborate with a team of engineers to design, implement, and optimize cloud infrastructure. You will be responsible for automating and maintaining critical infrastructure that powers our trading and investment platforms, ensuring scalability, security, and performance in a highly dynamic and regulated environment.
Key Responsibilities:
● Architect, develop, and maintain scalable, secure, and high-performance cloud-based infrastructure on AWS.
● Work closely with software engineering teams to optimize deployment processes, CI/CD pipelines, and software delivery strategies.
● Automate system provisioning, configuration management, and application deployment for reliability and scalability.
● Monitor, troubleshoot, and improve system performance, proactively addressing potential bottlenecks and security vulnerabilities.
● Implement advanced monitoring and logging solutions (Prometheus, Grafana, ELK stack) to ensure observability and rapid incident response.
● Enhance infrastructure automation and configuration management with tools like Ansible, Chef, or Puppet.
● Optimize AWS resource utilization for performance and cost-efficiency while ensuring high availability and redundancy.
● Work within a regulated financial environment, ensuring compliance with security policies, governance, and industry standards.
Required Qualifications:
● Bachelor's degree in a STEM field (Computer Science, Engineering, Mathematics, or a related discipline).
● 5+ years of experience in DevOps, cloud infrastructure, or site reliability engineering.
● Extensive experience managing AWS environments with best practices in security and scalability.
● Deep understanding of CI/CD tools such as Jenkins, GitLab CI/CD, and AWS CodePipeline.
● Strong expertise in maintaining and automating infrastructure and deployment processes.
● Advanced scripting and automation skills in Python, Bash, or similar languages.
● Hands-on experience with containerization and orchestration technologies, including Docker and Kubernetes.
● Expertise in designing and implementing infrastructure-as-code solutions using Terraform, Pulumi, CloudFormation, or similar tools.
● Hands-on experience with SST.dev for cloud application development and deployment.
● Strong familiarity with monitoring, logging, and alerting frameworks for proactive system management.
● Experience in automating security compliance, identity management, and network security best practices.
Bonus Points:
● Previous experience in hedge funds, financial services, or trading technology.
● Expertise in compliance, regulatory frameworks, and security practices in financial technology.
● Experience with AI/ML development tools, including Cursor, GitHub Copilot, or similar, as well as other cloud-based machine-learning infrastructure.
Benefits:
● Competitive compensation package, including performance-based bonuses.
● Exposure to cutting-edge financial technology and innovative trading systems.
● Opportunities for professional growth, leadership, and continued learning in a high-impact role.
● Flat organizational structure with direct impact on business outcomes.
● Flexible remote work options for exceptional candidates.
Skills: Jenkins, Terraform, Java, npm, DevOps, Amazon Web Services, MySQL, Docker, Git
Budget:
not specified
1 day ago
|
|||||
Ad-Hoc Cloud Engineer Needed for GCP Projects
|
20 - 50 USD
/ hr
|
1 day ago |
Client Rank
- Excellent
$712 310 total spent
245 hires
312 jobs posted
79% hire rate,
open job
4.75
of 141 reviews
|
||
Featured
We are seeking a highly skilled cloud engineer to join "our bench" to assist with various ad-hoc professional service projects, heavily focused on Google Cloud technologies.
Requirements:
- Hands-on experience with GCP and AWS (minimum of 4 years)
- Kubernetes (3+ years)
- Terraform (3+ years)
- Building CI/CD pipelines
- Configuration management
- Proficient in Helm or Terragrunt
- Understanding of advanced deployment strategies (Blue/Green, Canary, etc.)
- Monitoring tools such as Google Cloud Monitoring, Grafana, etc.
- Experience with Atlassian
- AI experience a plus!
If this sounds like a good fit, please submit a proposal so we can discuss next steps!
Skills: Cloud Engineering, Google Cloud Platform, DevOps, CI/CD, Kubernetes, Terraform
Hourly rate:
20 - 50 USD
1 day ago
|
|||||
Site Reliability (DevOps) Engineer - Contract Position
|
not specified | 1 day ago |
Client Rank
- Excellent
6095 jobs posted
100% hire rate,
open job
4.95
of 16946 reviews
|
||
Upwork Enterprise Client
Upwork ($UPWK) is the leading tech solution for companies looking to hire the best talent, maintain flexibility, and get more done. We’re passionate about our mission to create economic opportunities so people have better lives. Every year, more than $2 billion of work is done through Upwork by skilled professionals who want the freedom of working anytime, anywhere. Top companies connecting with extraordinary talent around the globe? Upwork is how.
This position is through Upwork's Talent Innovation Program (TIP). Our TIP team is a global group of professionals that augments Upwork's business, with members located all over the world. Upwork is the largest freelance site in the world, with access to the most qualified freelancers, and our enterprise customers leverage this ability to rapidly and effortlessly source high-quality talent from all over the world. As part of our enterprise offering, we also provide compliance and onboarding tools and advanced reporting capabilities.
We are seeking an experienced Site Reliability (DevOps) Engineer to support our main website. This position will focus on two areas:
1) Incident Response. You will help us improve our monitoring tools and automation to improve site reliability by identifying weaknesses and working with our development team to address those gaps. You will also help us manage the process of handling any type of incident impacting upwork.com, including coordination, communication, debugging, and remediation.
2) Project-oriented work (Chaos Engineering, Observability, Auto-Remediation, Resilience) and general SRE ticket work, with a particular focus on assisting our developers. This includes supporting and monitoring new and existing services, platforms, and application stacks; automation scripting; writing Terraform code; using AWS services and tools; managing nginx load balancers; managing DNS; configuring our CDN; and assisting in debugging code in collaboration with developers.
This is an opportunity to work with a major revenue-producing website with millions of users. In addition to making sure everything works, you are also expected to contribute to the continuous improvement of our environment. This is a full-time position (~40 hours per week, Monday–Friday). This role participates in our production on-call rotation during your daytime hours and on some weekends (once every 2 weeks).
Key Qualifications:
- 3+ years of experience in a Site Reliability Engineer or DevOps role, with a primary focus on managing cloud-based services and infrastructure
- Experience with AWS (EC2, S3, ECS, VPC, ElasticSearch, Lambda), Linux system administration, and monitoring tools (Prometheus, Grafana, CloudWatch, Datadog, Dynatrace)
- Good working knowledge of load balancers, firewalls, and TCP/IP networking architecture
- Strong programming skills in Python and Terraform
- Critical thinking with good debugging and problem-solving skills
- Automation advocate: you truly believe in removing operational load with software
- Familiarity with microservices architecture and container orchestration with Kubernetes
- Experience with scale testing, disaster recovery, and capacity planning
- Excellent verbal and written communication skills (English)
Responsibilities:
- Incident management: play an active role in production on-call, responding swiftly to troubleshoot and resolve production issues; size up a situation, assess the effectiveness of various tactics and strategies, and make rapid decisions on appropriate courses of action
- Ensure high availability by implementing and maintaining resilient cloud architectures; monitor system performance and proactively identify and resolve potential points of failure
- Develop and maintain automation scripts, tools, and processes to streamline system deployment and monitoring tasks and to eliminate toil and reduce operational overhead
- Create and maintain comprehensive dashboards and playbooks for production on-call
- Continuously improve the production on-call experience and system sustainability and effectiveness
- Identify areas to improve service resilience through techniques such as chaos engineering and performance/load testing
- Think critically and outside the box: understand present challenges and plan improvements for the future
Upwork is proudly committed to fostering a diverse and inclusive workforce. We never discriminate based on race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical condition), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Skills: Python, Jenkins, Git, Linux System Administration, Amazon EC2, Java, Progress Chef, Jira
Budget:
not specified
1 day ago
|
|||||
Live ops/ Dev Ops engineer for Short term contract
|
20 - 30 USD
/ hr
|
1 day ago |
Client Rank
- Medium
4 jobs posted
open job
|
||
Please read this ad carefully.
Hello, we are looking for a freelancer (no agencies, please) to work with our remote US client. A successful candidate will have excellent written and verbal communication skills, attention to detail, and reliability. Please put the word pumpkin in the first sentence of your reply to this post.
Skills:
- Familiarity with AWS and its security infrastructure is a must.
- Other tools such as Grafana are excellent.
- Familiarity with Hathora is a huge plus.
Skills: Microsoft Windows, Linux, Ubuntu, DevOps Engineering, Deployment Automation, CI/CD, DevOps, Amazon Web Services, Grafana
Hourly rate:
20 - 30 USD
1 day ago
|
|||||
NOC Monitoring System Requirement
|
not specified | 20 hours ago |
Client Rank
- Risky
|
||
NOC Monitoring System Requirement Summary
About Us: Somcable is a regional wholesale internet provider operating submarine and terrestrial fiber infrastructure in East Africa. We deliver high-capacity IP transit, peering, and interconnection services to carriers, ISPs, and enterprises.
Main Objective: We are looking to deploy a centralized monitoring system to:
- Monitor all routers/switches, ESXi hosts, and Cisco NCS 2000 transmission equipment
- Automatically raise alerts for faults or threshold breaches
- Assign alerts as actionable tickets to the NOC team
Key Requirements:
- Device monitoring (BGP sessions, interface utilization, CPU/RAM, link state, power, etc.)
- Alerting and notification engine with escalation workflows
- Web dashboards for live status views
- Integration with a ticketing platform to assign alerts to engineers
- Preference for on-prem deployment
Preferred Tools:
- Monitoring: Prometheus, Grafana, InfluxDB, Fluentd, FastNetMon
- Ticketing: Zammad (our selected tool for incident management)
We are open to better solutions if you can recommend more efficient or integrated alternatives. Our goal is fast deployment, minimal overhead, and long-term reliability.
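The alert-to-ticket flow requested above boils down to evaluating polled device metrics against thresholds and handing any breaches to the ticketing system. A minimal sketch (the metric names, threshold values, and alert fields are illustrative; a Zammad integration would consume the returned alert dicts):

```python
def evaluate_alerts(samples: dict, thresholds: dict) -> list:
    """Compare polled device metrics to thresholds and return one alert dict
    per breach, ready to be turned into an actionable NOC ticket."""
    alerts = []
    for device, metrics in samples.items():
        for metric, value in metrics.items():
            limit = thresholds.get(metric)
            if limit is not None and value >= limit:
                alerts.append({"device": device, "metric": metric,
                               "value": value, "threshold": limit})
    return alerts

# Example poll cycle: edge-1's CPU breaches its threshold, interface util does not.
breaches = evaluate_alerts({"edge-1": {"cpu_pct": 95, "if_util_pct": 40}},
                           {"cpu_pct": 90, "if_util_pct": 80})
```

In a Prometheus-based deployment this logic lives in alerting rules and Alertmanager routes rather than application code, but the threshold-and-escalate model is the same.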
Skills: Grafana, Prometheus, Kubernetes, System Monitoring, Alert Notifications, Python, Next.js, Network Monitoring
Budget:
not specified
20 hours ago
|
|||||
Windows Server Maintenance & DevOps Engineer (CI/CD, Automation, Monitoring)
|
500 USD | 17 hours ago |
Client Rank
- Excellent
$301 622 total spent
1094 hires
108 jobs posted
100% hire rate,
open job
4.60
of 484 reviews
|
||
Ongoing role for a DevOps Engineer responsible for maintaining and improving a suite of Windows-based production environments and automation pipelines.
The role involves:
- Managing and maintaining multiple Windows Server 2019/2022 instances used in production and staging environments
- Implementing and maintaining CI/CD pipelines using GitHub Actions and custom PowerShell scripts
- Automating deployments of backend services built with Python (FastAPI, Flask) and Node.js, containerized with Docker
- Setting up and maintaining monitoring and logging solutions using Prometheus/Grafana/ELK
- Automating routine server maintenance tasks (e.g., disk cleanup, log rotation, security patching)
- Ensuring high uptime, failover readiness, and consistent performance tuning
- Implementing Redis/PostgreSQL backups, firewall rules, SSL setup, and domain/SSL renewals
- Integrating cloud services (e.g., AWS/GCP) with on-premise setups via VPN tunnels or hybrid deployment patterns

Ongoing tasks include:
- Debugging deployment failures
- Monitoring server health and usage
- Adding pipeline improvements and secrets rotation
- Periodic load testing and Docker image updates
- User permission and environment config audits

📌 Stack: Windows Server, GitHub Actions, Docker, Python, Node.js, Redis, PostgreSQL, PowerShell, Prometheus, NGINX, AWS/GCP

This is a long-term contract for infrastructure support and improvements, with opportunities to expand into full-scale DevOps architecture.
Skills: Microsoft Windows, DevOps, CI/CD, Deployment Automation
Fixed budget:
500 USD
17 hours ago
|
|||||
Windows Server Maintenance & DevOps Engineer (CI/CD, Automation, Monitoring)
|
500 USD | 17 hours ago |
Client Rank
- Excellent
$20 275 total spent
5 hires
3 jobs posted
100% hire rate,
open job
4.96
of 4 reviews
|
||
Ongoing role for a DevOps Engineer responsible for maintaining and improving a suite of Windows-based production environments and automation pipelines.
The role involves:
- Managing and maintaining multiple Windows Server 2019/2022 instances used in production and staging environments
- Implementing and maintaining CI/CD pipelines using GitHub Actions and custom PowerShell scripts
- Automating deployments of backend services built with Python (FastAPI, Flask) and Node.js, containerized with Docker
- Setting up and maintaining monitoring and logging solutions using Prometheus/Grafana/ELK
- Automating routine server maintenance tasks (e.g., disk cleanup, log rotation, security patching)
- Ensuring high uptime, failover readiness, and consistent performance tuning
- Implementing Redis/PostgreSQL backups, firewall rules, SSL setup, and domain/SSL renewals
- Integrating cloud services (e.g., AWS/GCP) with on-premise setups via VPN tunnels or hybrid deployment patterns

Ongoing tasks include:
- Debugging deployment failures
- Monitoring server health and usage
- Adding pipeline improvements and secrets rotation
- Periodic load testing and Docker image updates
- User permission and environment config audits

📌 Stack: Windows Server, GitHub Actions, Docker, Python, Node.js, Redis, PostgreSQL, PowerShell, Prometheus, NGINX, AWS/GCP

This is a long-term contract for infrastructure support and improvements, with opportunities to expand into full-scale DevOps architecture.
Skills: DevOps, CI/CD, Deployment Automation, System Administration, Docker, CI/CD Platform
Fixed budget:
500 USD
17 hours ago
|
|||||
Quality Assurance Lead (QA)
|
7 - 15 USD
/ hr
|
14 hours ago |
Client Rank
- Good
$8 664 total spent
10 hires
39 jobs posted
26% hire rate,
open job
5.00
of 7 reviews
|
||
Responsibilities
- Lead, mentor, and manage a team of automation and manual QA engineers.
- Define and implement robust test strategies, plans, and processes across multiple projects.
- Ensure comprehensive automated and manual test coverage for all new and existing features.
- Design, build, and maintain scalable automation frameworks for web, mobile, and API testing using tools like Selenium, Appium, BrowserStack, and Cypress.
- Oversee integration of test automation suites with CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps, etc.).
- Collaborate closely with product, engineering, and design teams to uphold high-quality standards from requirements to release.
- Facilitate defect triage and root cause analysis, and implement continuous quality improvements.
- Monitor and analyze key QA metrics, providing actionable insights and reports to stakeholders.
- Champion best practices in testing, documentation, and continuous QA process improvement.
- Encourage the exploration and adoption of emerging tools, including AI-driven QA solutions, to enhance testing efficiency and innovation.

Required Skills and Experience (2-5 years)
- 2 to 5 years of hands-on experience in Quality Assurance, including both manual and automation testing across web, mobile, and API platforms.
- Strong hands-on experience in manual testing, including regression, integration, system, and exploratory testing.
- Proven expertise in automation tools such as: Selenium for web UI testing; Appium/BrowserStack for mobile testing; Cypress for JavaScript-based applications.
- Solid understanding of CI/CD tools like Jenkins, GitLab CI, Azure DevOps, etc.
- Solid experience with API testing tools such as Postman, RestAssured, or equivalent.
- Experience with performance, load, and stress testing tools such as Apache JMeter, Grafana k6, or similar, with the ability to design and execute test scenarios to evaluate system scalability, reliability, and responsiveness.
- Familiarity with version control systems (e.g., Git) and test management tools (e.g., JIRA, Monday Dev, TestRail, Zephyr), with a deep understanding of Agile/Scrum methodologies.
- Strong leadership, communication, and stakeholder management capabilities.
- Exposure to or strong interest in AI-powered testing tools and platforms (e.g., Testim, Mabl, Applitools, Functionize), with a willingness to experiment with and integrate cutting-edge technologies into QA workflows.
- Able and willing to implement AI technologies in the QA environment to achieve maximum efficiency, with prior exposure to AI-powered testing tools considered a plus.
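The "monitor and analyze key QA metrics" responsibility above usually reduces to a handful of ratios. A small Python sketch using commonly cited (but here assumed) definitions of defect escape rate and automation coverage:

```python
def defect_escape_rate(found_in_prod: int, found_pre_release: int) -> float:
    """Share of defects that escaped to production (assumed definition:
    prod defects / all defects). Returns 0.0 when no defects were found."""
    total = found_in_prod + found_pre_release
    return found_in_prod / total if total else 0.0

def automation_coverage(automated: int, total_cases: int) -> float:
    """Fraction of test cases covered by automation."""
    return automated / total_cases if total_cases else 0.0

def qa_summary(metrics: dict) -> str:
    """Render a one-line stakeholder summary from raw counts."""
    esc = defect_escape_rate(metrics["prod_defects"],
                             metrics["pre_release_defects"])
    cov = automation_coverage(metrics["automated_cases"],
                              metrics["total_cases"])
    return f"escape rate {esc:.1%}, automation coverage {cov:.1%}"
```

Exact metric definitions vary between teams; the point is that the QA lead owns both the definitions and the reporting cadence.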
Skills: API Testing, Integration Testing, GitLab, Manual Testing, Jenkins
Hourly rate:
7 - 15 USD
14 hours ago
|
|||||
DevOps Engineer
|
5 - 15 USD
/ hr
|
12 hours ago |
Client Rank
- Good
$8 664 total spent
10 hires
39 jobs posted
26% hire rate,
open job
5.00
of 7 reviews
|
||
DevOps Engineer
Experience Level: 2–4 Years
Location: Remote

About the Role
We are seeking a skilled and proactive DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for automating, scaling, and maintaining our cloud infrastructure while ensuring high availability and performance. You will also drive efficient collaboration between development and operations through seamless version control and CI/CD workflows.

Key Responsibilities
1. Infrastructure Management and Automation:
- Design, implement, and maintain AWS cloud infrastructure, including EC2, S3, EKS, and related services.
- Automate infrastructure provisioning, scaling, and configuration using tools like Terraform or CloudFormation.
2. Version Control and Collaboration:
- Manage repositories and workflows in GitHub and GitLab, ensuring code integrity and streamlined collaboration.
- Implement branch management and merge strategies, and enforce best practices in version control.
3. Continuous Integration/Continuous Deployment (CI/CD):
- Build and maintain robust CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or equivalent tools.
- Ensure seamless integration and delivery of software by automating deployments and monitoring.
4. Monitoring and Logging:
- Implement and maintain monitoring solutions using Prometheus, Grafana, and AWS CloudWatch.
- Analyze logs and troubleshoot system issues to ensure optimal performance and uptime.
5. Containerization and Orchestration:
- Deploy and manage containerized applications using Docker and Kubernetes.
- Optimize Kubernetes clusters for scalability and reliability.
6. Security and Maintenance:
- Manage system patching and updates to address vulnerabilities proactively.
- Enforce security best practices across the infrastructure.
7. Troubleshooting and Support:
- Conduct root cause analysis of infrastructure and application issues.
- Collaborate with development and product teams to resolve technical challenges.
8. Collaboration and Documentation:
- Document infrastructure, processes, and troubleshooting guides.
- Communicate effectively across teams to ensure alignment on technical initiatives.

Key Skills and Qualifications
Technical Expertise:
- Hands-on experience with AWS services such as EC2, S3, EKS, RDS, and CloudFront.
- Proficiency in configuring and managing CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI/CD.
- Expertise in version control tools, specifically GitHub and GitLab, including workflow setup and management.
- Knowledge of Docker and Kubernetes for containerization and orchestration.
- Experience with monitoring tools such as Prometheus and Grafana.
- Strong skills in log analysis, troubleshooting, and root cause identification.
Scripting and Automation:
- Proficiency in scripting languages such as Python, Bash, or PowerShell for automation.
- Familiarity with Infrastructure-as-Code (IaC) tools like Terraform or AWS CloudFormation.
Soft Skills:
- Excellent communication and interpersonal skills.
- Strong problem-solving and analytical skills.
- Ability to work in a fast-paced, collaborative environment.
Preferred Qualifications:
- Experience with patch management and system updates.
- Knowledge of security and compliance best practices.
- Relevant certifications such as AWS Certified DevOps Engineer or Certified Kubernetes Administrator (CKA).
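A recurring pattern behind the automation and troubleshooting duties above is retrying flaky infrastructure calls (API throttling, transient network errors) with exponential backoff. A minimal, dependency-free Python sketch; the injectable `sleep` parameter is a testing convenience, not part of any named library:

```python
import time

def with_backoff(fn, attempts: int = 5, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Call fn(), retrying on any exception with exponential backoff
    (base_delay, then 2x, 4x, ...). Re-raises after the final attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original error
            sleep(base_delay * (2 ** attempt))
```

Usage: wrap a provisioning or health-check call, e.g. `with_backoff(lambda: check_endpoint(url))`. Production code would typically narrow the caught exception types and add jitter to the delay.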
Skills: Docker, Kubernetes, DevOps, CI/CD Platform, Scripting, Grafana
Hourly rate:
5 - 15 USD
12 hours ago
|