ArabyAds was founded in 2013. We help our clients acquire customers at scale using data & technology across a diverse set of digital advertising inventories, managed through our four platforms: Checkout (social commerce), iConnect (influencer marketing), Adfalcon (programmatic advertising) & DeviceBoost.
We are a team of 260+ passionate people across the UAE, Egypt, KSA, Jordan, Lebanon & Tunisia. We are an agile, fast-growing, interdisciplinary team of experts in media planning, influencer marketing, consumer research, software development, data science, artificial intelligence and machine learning, with a shared mission to create sustainable value for our clients.
ArabyAds is proud to be an equal opportunity employer, regardless of race, colour, gender, religion, age, disability or marital status. We currently have people of 16 nationalities across our 6 locations and are happy to celebrate all festivals.
The Data Engineer will be responsible for designing, developing and maintaining a variety of databases, data warehouses, data lakes, data pipelines, ETL/ELT processes and data processing jobs. We are mainly on the AWS cloud!
They will work within the Data Engineering team and interact with internal teams such as Account Managers, Sales, Data Scientists, Data Analysts and the application teams.
•Design, develop and fully own data engineering projects built from requirements gathered from stakeholders including, but not limited to, product, data science, data analytics and BI teams.
•Design, develop and fully own database work requested by the application and mobile teams.
•Create ingestion pipelines to collect data from different sources and push to targets.
•Design data models; create databases, schemas, procedures, functions, users, roles, etc.
•Migrate existing databases and systems to Snowflake as and when required.
•Optimize data ingestion, computation and querying in Snowflake.
•Evaluate, test and update data applications according to changing requirements.
•Optimize current implementations for fault tolerance, speed and scalability.
•Monitor and report on data system health and security.
•Support internal teams to use our systems, troubleshoot & identify issues, and resolve them.
•Follow best coding practices and participate in the code review process.
•Make efficient use of the development environment, build tools, version control and bug-tracking systems across all projects.
•A minimum of 3–5 years of experience in data engineering.
•A minimum of 1–2 years of experience with Snowflake. Good knowledge of Snowflake databases, schemas, tables, procedures, functions, views, stages, pipes, etc.
•1-2 years of experience in AWS.
•Strong hands-on experience writing complex, efficient SQL queries, stored procedures and database code.
•Experience ingesting data from different file formats such as XML, CSV, JSON, TSV, TXT, Parquet and Avro into Snowflake.
•Familiarity with concepts, design and mechanics of traditional databases, data warehouses and data lakes.
•Familiarity with ETL, ELT, Transformation, Processing, Computation etc.
•Experience with various relational databases such as MySQL, PostgreSQL, MariaDB.
•Experience with Git, CI/CD, and writing unit tests.
•Experience working on an agile development project.
•Experience with Slack, Jira and the Atlassian tool suite.
Good to Have
•Experience moving data from other databases and sources (MySQL, Hadoop, etc.) into Snowflake.
•Experience with NoSQL databases such as DynamoDB, Redis, Elasticsearch, Cassandra, MongoDB
•Knowledge of at least one ETL tool.
•Experience with queues and streams such as Kafka, AWS Kinesis
•Experience diagnosing and resolving performance and scalability issues.
•Data engineering experience with Hadoop and Spark.
•Knowledge of Python is a plus
Non-Technical Competencies
•Very good English and communication skills.
•Positive, can-do attitude.
•Able to work independently as well as in a team.
•Able to take up challenges and come up with innovative solutions.
•We highly appreciate self-starters, initiators and contributors.