Staff Software Engineer - PRO

OpenGov

Software Engineering
Pune, Maharashtra, India
Posted on Jan 24, 2025
OpenGov is the leader in AI-enabled software for cities, counties, state agencies, and special districts. With a mission to power more effective and accountable government, OpenGov serves 2,000 communities across the United States. OpenGov is built exclusively for the unique asset management, permitting and licensing, procurement and contract management, tax and revenue, budgeting and planning, and financial management needs of the public sector. The OpenGov platform empowers organizations to operate more efficiently, adapt to change, and strengthen public trust.
Learn more or request a demo at opengov.com
Job Summary:
As a Staff Software Engineer - Data Science at OpenGov, you will play a key leadership role in shaping the data architecture that powers best-in-class SaaS solutions, enabling efficiency, transparency, and accountability in government operations. You will design, build, and optimize a robust, scalable, high-performance data infrastructure, leading efforts across data engineering, real-time streaming, change data capture (CDC), data modeling, transformation, governance, and applied machine learning.
In this role, you will work at the intersection of engineering, analytics, and business intelligence, owning the entire data lifecycle, from ingestion and processing to modeling and visualization. You'll build real-time and batch-processing pipelines, optimize large-scale distributed systems, and drive data governance and security best practices in an AWS-first cloud environment.
A typical day involves solving complex data challenges to ensure speed, scalability, and reliability. You will collaborate with a globally distributed cross-functional team of product managers, UX engineers, data visualization experts, and platform engineers to transform raw data into actionable insights.
We are looking for a passionate, strategic thinker who thrives on navigating the challenges of modern data architecture. You’re a self-starter, problem-solver, and technical leader who values clean, efficient, scalable solutions. At OpenGov, we embrace collaboration, innovation, and impact, and we’re excited to bring on someone who shares these values to help build the future of data-driven applications.

Responsibilities:

  • Architect and develop robust, highly performant, scalable data processing pipelines for real-time and batch workloads.
  • Lead end-to-end ownership of data infrastructure, including ingestion, transformation, storage, governance, and security in an AWS-first cloud environment.
  • Define and implement the data strategy, partnering with engineering leadership to align on best practices for data engineering, data science, and analytics.
  • Build and optimize real-time data streaming pipelines using Kafka and change data capture (CDC) to enable low-latency analytics and event-driven architectures.
  • Model and transform data for scalable analytical workloads.
  • Implement data quality frameworks, schema evolution strategies, and governance policies to ensure data integrity, lineage, and observability.
  • Collaborate cross-functionally with product managers, UX teams, and data visualization engineers to deliver data-driven features.
  • Develop reusable, maintainable, modular components that enhance OpenGov’s data infrastructure.
  • Provide mentorship and technical leadership to data engineers, fostering a culture of excellence and best practices.
  • Write and maintain detailed technical documentation, ensuring clarity in system designs, API contracts, and architecture decisions.
  • Advocate for performance, security, and scalability best practices, proactively identifying technical weaknesses and crafting plans to address them.
  • Contribute to OpenGov’s culture of innovation, adopting emerging data technologies and influencing engineering-wide improvements.
  • Lead initiatives that solve the organization’s most complex data challenges, delivering tangible business impact.
  • Drive continuous improvement in engineering efficiency, enabling faster experimentation, deployment, and monitoring of data solutions.
  • Delight customers and stakeholders by delivering highly reliable, scalable, and insightful data solutions.

Requirements and Preferred Experience:

  • BA/BS in Computer Science, Data Science, Engineering, or a related technical field, or equivalent professional experience.
  • 12+ years of professional experience in software and data engineering, focusing on cloud-native architectures and large-scale data processing.
  • 7+ years of experience designing and implementing scalable, highly available, and high-performance data platforms, preferably in a multi-tenant SaaS environment.
  • 5+ years of experience with AWS data and analytics services, including S3, Redshift, Glue, EMR, Kinesis, Lambda, and DynamoDB.
  • 5+ years of experience building, optimizing, and maintaining large-scale data pipelines using Spark, Airflow, or similar technologies.
  • 5+ years of experience with SQL and NoSQL databases, including PostgreSQL, DynamoDB, Elasticsearch, OpenSearch, or similar platforms.
  • 3+ years of experience with data streaming and event-driven architectures, including Kafka, Kinesis, and CDC (e.g., Debezium, AWS DMS).
  • 3+ years of experience designing and maintaining modern data lakes and warehouses, with expertise in data partitioning, indexing, and performance tuning.
  • (Preferred) Strong understanding of data governance, security, and compliance best practices, including IAM, encryption, lineage tracking, and access control policies.
  • (Preferred) Expertise in designing, implementing, and maintaining scalable microservices and RESTful data processing and analytics APIs.
  • (Preferred) Experience with real-time analytics and event-driven processing frameworks such as Apache Flink or AWS Kinesis Analytics.
  • (Preferred) Hands-on experience implementing ML/AI pipelines using AWS SageMaker.
  • (Preferred) Familiarity with BI and visualization tools like Metabase, Tableau, Looker, or Power BI.
Why OpenGov?
A Mission That Matters
At OpenGov, public service is personal. We are passionate about our mission to power more effective and accountable government: government that operates efficiently, adapts to change, and strengthens public trust. Some people say this is boring. We think it’s the core of our democracy.
Opportunity to Innovate
The next great wave of innovation is unfolding with AI, and it will impact everything—from the way we work to the way governments interact with their residents. Join a trusted team with the passion, technology, and expertise to drive innovation and bring AI to local government. We’ve touched 2,000 communities so far, and we’re just getting started.
A Team of Passionate, Driven People
This isn’t your typical 9-to-5 job; we operate in a fast-paced, results-driven environment where impact matters more than simply clocking in and out. Our global team of 800+ employees is united in our commitment to challenge the status quo. OpenGov is headquartered in San Francisco and has offices in Atlanta, Boston, Buenos Aires, Chicago, Dubuque, Plano, and Pune.
A Place to Make Your Mark
We pride ourselves on our performance-based culture, where every employee is encouraged to jump in head-first and take action to help us improve. If you have a great idea, we want to hear it. Excellent performance is recognized and rewarded, and we love to promote from within.
Benefits That Work for You
Enjoy an award-winning workplace with the benefits to match, including:
- Comprehensive healthcare options for individuals and families
- Flexible vacation policy and paid company holidays
- 401(k) with company match
- Paid parental leave, wellness stipends, and HSA contributions
- Professional development and growth opportunities
- A collaborative office environment with weekly catered lunches