This position is for the selected candidate(s) to work full-time under contract with a Canada-based FinTech client of CodersNow.
- This position offers a highly competitive salary.
- This is a remote position from anywhere in Latin America.
- High English proficiency is a must.
- Reports to: Team Lead / CTO.
- Interview rounds: 2-4.
We provide a single API to all major accounting software providers (QuickBooks, Xero, Sage, etc.), enabling on-demand access to financial transactions, analytics, insights and reports on Small Business customers. Our solution suits any lender, financial institution, accounting firm, auditor or tech developer that requires financial data on its Small Business customers to review a credit application or assess the financial health of a business.
Our Data as a Service solution allows our customers to be up and running in hours. We provide quick, low-cost and direct access to both your existing and new customers’ accounting software via our single API.
We are on a mission to make our accounting API Data as a Service solution, supported by our Machine Learning applications, available to everyone, expanding our focus beyond banking to every business that could benefit from real-time accounting data on its Small Business customers.
WHO YOU ARE:
You are a passionate DevOps practitioner with a strong Software Engineering or Systems Engineering background and the desire to work on challenging projects that will require you to constantly master new skills and technologies, particularly in the following 3 areas:
- Area 1 – Cloud Infrastructure Automation:
- Design, implement and manage cloud solutions and reference architectures
- State-of-the-art Infrastructure-as-Code practices with tooling such as HashiCorp Terraform, Terraform Cloud, Atlantis, Terragrunt, AWS CloudFormation and the AWS Python SDK (boto3)
- Area 2 – Kubernetes + K8s-native CI/CD and GitOps:
- Design, implement and manage Kubernetes workloads in a multi-cluster environment
- Implement advanced features such as horizontal pod autoscaling and cluster autoscaling
- Modern CI/CD tooling for/on Kubernetes: Helm, ArgoCD, Tekton, Jenkins-X, HashiCorp Vault
- Area 3 – Observability:
- Implement metrics-driven automation, performance monitoring, log aggregation and analytics
- Tooling: AWS CloudWatch, New Relic, Fluentd, Prometheus, Grafana, Logstash, Kibana, Elasticsearch (ELK)
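As one concrete illustration of the autoscaling work in Area 2: the Horizontal Pod Autoscaler's documented scaling rule is essentially a one-liner, sketched here in Python (the function name and numbers are ours, not from any Kubernetes library):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Core scaling rule of the Kubernetes Horizontal Pod Autoscaler:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(4, 90, 60))  # 6
```

Cluster autoscaling then adds or removes nodes so the scheduler can actually place those replicas.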
Working in the infrastructure team, you will participate in the design, delivery and management of cloud infrastructure for a 100% cloud-native, highly-available, distributed data platform consisting of several products, including our Data-as-a-Service accounting API.
- Be a fervent practitioner of best DevOps practices:
- Design, implement and manage cloud infrastructure following reference architectures, Infrastructure-as-Code, CI/CD and GitOps.
- Contribute to software development with focus on configuration management, security, reliability, performance efficiency and operational excellence.
- Pursue automation relentlessly by writing good, automatable code: Terraform (HCL, Golang), Kubernetes (YAML, Golang), CI/CD pipelines (various DSLs), Dockerfiles and any other necessary scripting/programming in Bash, Node.js and Python.
- Diligently use the Atlassian suite of products (Bitbucket, Jira, Confluence) for version control, management and documentation of an agile development lifecycle (write stories & tasks, work on feature branches, write automated tests, open pull requests, participate in code reviews).
- Manage & monitor cloud infrastructure, including relational and NoSQL databases such as MySQL and MongoDB, as well as other transient data stores such as RabbitMQ and Redis cache.
- Contribute your knowledge and ideas to enhance our SDLC and development practices.
- Adhere to an agile development process, and lead the constant refinement and evolution of CI/CD.
SKILLS & QUALIFICATIONS:
- Strong familiarity and practical experience with Linux (Amazon Linux, CentOS, Ubuntu, Alpine) and their package managers (yum, rpm, apt, apk)
- Effective knowledge of Bash scripting and common Linux/GNU utilities, such as find, awk, grep, sed, xargs, vim
- Effective knowledge of SSH, ssh-keygen, ssh-add, SSH tunnelling.
- Strong familiarity and practical experience with Git.
- Strong familiarity and practical experience with Docker, Dockerfiles, docker-compose.
- Proficiency with at least one high-level programming language such as Node.js, Python or Golang.
- Working knowledge of relational databases (MySQL) and NoSQL databases such as MongoDB.
- Experience in the design and implementation of highly-available, fault-tolerant, distributed systems, replication, failover and disaster recovery.
- Familiarity with Agile software development methodologies such as Scrum.
- Self-motivated, with good organizational and communication skills.
- Bachelor of Science in Computer Science/Computer Engineering.
In addition to the core competencies listed above, a good candidate would score highly in at least 2 of the 3 aforementioned areas (all three would be the unicorn):
Area 1 – Cloud Infrastructure Automation:
- 5+ years of production experience, with at least 3 working within the AWS cloud (preferred) or Azure/Google Cloud Platform (considered). This includes, but is not limited to:
- VPC, route tables, internet/NAT gateways, security groups, network ACLs
- EKS, ECS, ECR, EC2, autoscaling, load balancing, elastic IPs, EBS volumes, snapshots
- IAM, users, roles, policies, cross-account access; OAuth2, OpenID Connect, SAML
- Route 53, Certificate Manager, CloudFront, S3, RDS Aurora MySQL, ElastiCache Redis
- Working knowledge of Terraform with remote state backends (preferred) or CloudFormation to provision and manage immutable infrastructure. Good to know: Terragrunt, Atlantis.
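Since Terraform also accepts JSON-syntax configuration (`*.tf.json`) alongside HCL, remote state configuration can be emitted programmatically. A minimal sketch of an S3 backend block, generated from Python (the bucket, key and lock-table names are made up):

```python
import json

# Terraform S3 remote state backend, expressed as a *.tf.json document.
# All resource names below are hypothetical placeholders.
backend = {
    "terraform": {
        "backend": {
            "s3": {
                "bucket": "example-tf-state",            # hypothetical bucket
                "key": "prod/network/terraform.tfstate",
                "region": "us-east-1",
                "dynamodb_table": "example-tf-locks",    # state locking
                "encrypt": True,
            }
        }
    }
}
print(json.dumps(backend, indent=2))
```

Tools like Terragrunt exist largely to keep blocks like this DRY across many environments; Atlantis runs the resulting plans from pull requests.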
Area 2 – Kubernetes + K8s-native CI/CD and GitOps:
- Strong familiarity and practical experience with Kubernetes workloads: Deployments, StatefulSets, Services, Jobs, Ingress, PersistentVolumes, PersistentVolumeClaims, ConfigMaps, Secrets.
- Declarative object configuration with kubectl, Kustomize and YAML manifests. Helm charts.
- Cluster administration: networking, logging architecture, metrics server, kube-proxy.
- Knowledge of modern K8s-native tooling such as Tekton Pipelines and GitOps with ArgoCD
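To make the declarative-configuration bullet concrete: `kubectl` applies JSON as readily as YAML, so a minimal Deployment object with the field names listed above can be sketched as a plain Python dict (the app name and image are placeholders, not a real workload):

```python
import json

# A minimal Kubernetes Deployment manifest; names/images are hypothetical.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-api"},
    "spec": {
        "replicas": 3,
        # The selector must match the pod template's labels.
        "selector": {"matchLabels": {"app": "example-api"}},
        "template": {
            "metadata": {"labels": {"app": "example-api"}},
            "spec": {
                "containers": [{
                    "name": "api",
                    "image": "example/api:1.0.0",
                    # Configuration and secrets are injected by reference.
                    "envFrom": [{"configMapRef": {"name": "example-config"}}],
                }]
            },
        },
    },
}
print(json.dumps(deployment))
```

Helm templates, Kustomize overlays and ArgoCD applications are all, at bottom, ways of producing and reconciling objects of exactly this shape.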
Area 3 – Observability:
- Experience implementing performance monitoring and visualization with tools such as Prometheus, Grafana, New Relic and AWS CloudWatch.
- Experience implementing log aggregation and visualization with tools such as Fluentd, Logstash, Kibana and AWS CloudWatch.
- Experience with SIEM and analytics solutions such as SumoLogic, DataDog or Elasticsearch
- Prior experience with SaaS products and startups.
- Experience working with payment and financial platforms (e.g. Stripe, QuickBooks, etc.).
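A tiny taste of the metrics work in Area 3: a nearest-rank percentile over raw latency samples. This is a deliberate simplification — Prometheus's `histogram_quantile` interpolates within histogram buckets rather than sorting raw samples — and the data below is made up:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at least
    p% of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 90, 14, 13, 250, 16, 12, 13]
print(percentile(latencies_ms, 95))  # 250
```

The point of dashboards built on this kind of aggregate: the p95/p99 tail (250 ms here) tells a very different story from the median (~13 ms).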
MAIN BENEFITS (contractor position):
- Equity options
- Work-from-home expense reimbursement
- Annual company retreat (paid expenses)
- Unlimited Vacation Policy