- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS Big Data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Advanced working knowledge of SQL, including experience with relational databases and query authoring, as well as working familiarity with a variety of other databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytical skills for working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Experience with big data tools: Hadoop, Spark, Hive, etc.
- Experience with relational SQL databases (Postgres, SQL Server, Oracle, MySQL) and NoSQL stores (Solr, Elasticsearch, Cassandra).
- Experience with data pipeline and workflow management tools: Luigi, Airflow, etc.
- Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc.
- Experience with object-oriented/functional scripting languages: Python or Java.
- Knowledge of financial markets or the stock market is a plus (but don’t worry if you don’t have it; you can learn all about becoming a successful stock investor after joining us!).
- Work alongside talented young engineers experienced in developing financial and securities products
- Be consulted on and kept informed of the roadmap for your professional development and your work at the company
- Competitive income (including a 13th-month salary, holiday and Tet bonuses, performance-based bonuses, and an annual business bonus)
- 100% of salary paid during the two-month probation period
- Annual salary review
- High-end computers provided for work
- Work in a Grade A office building
- Free access to a diverse collection of technology books in the workspace
- Microwave ovens, refrigerators, and coffee machines always available
- Social insurance, health insurance, and unemployment insurance in accordance with labor law
- Annual health check-up at a prestigious hospital
- Dynamic working environment that supports your growth and development
- Individual intelligence, teamwork, and creative ideas that advance the team and the company are always valued
- Community and cultural programs, courses, and knowledge-sharing activities
- Team building, Dclub, and volunteering activities
- Send your CV to firstname.lastname@example.org with the subject line "Big data_Name", or contact us via Skype (phamminhtam.hr) to find out more about the vacancy.