EY Senior Manager - Big Data, Advisory, Performance Improvement in Singapore, Singapore
Senior Manager - Big Data, Advisory, Performance Improvement
Requisition # SIN004GM
Post Date Jan 11, 2019
The EY Data and Analytics team are specialists in information management, advanced analytics and business intelligence. We implement the information-driven strategies and systems that offer the highest return on investment, profitability, and service or policy outcomes for our clients.
Our consultants work to create a lasting organisational culture that encourages people to use information and technology more creatively and more intelligently to get better business results.
As a Senior Manager in the EY Data and Analytics team, you will deliver value-added services to our clients and will be expected to specialise in some or all of the following areas:
•Lead and manage the delivery of analytics solutions to Ernst & Young clients in the APAC region.
•Develop, build and manage business pipeline and client relationships.
•Contribute your expertise to strategy and roadmaps.
•Communicate effectively with the EY Partners, the team and the client regarding the progress of the project and be a role model to the team members in exhibiting the Ernst & Young best practices.
At Ernst & Young, we know it's your point of view, energy and enthusiasm that make the difference.
•Lead client engagements.
•Work effectively as a team and project director, sharing responsibility, providing support, maintaining communication, and updating senior team members on progress.
•Help prepare reports and schedules that will be delivered to clients and other parties.
•Develop and maintain productive working relationships with client personnel.
•Build strong internal relationships within Ernst & Young Advisory Services and with other service lines across the organisation.
•Generate new client leads by leveraging existing relationships and building new ones.
•Lead business development initiatives from lead generation through qualification, proposal development and client presentations.
•Conduct performance reviews and contribute to performance feedback for staff.
•Contribute to people-related initiatives including recruiting and retaining staff.
•Contribute, guide and develop technical and functional skills of staff.
•Understand and follow workplace policies and procedures.
•Bachelor's degree or above in Analytics, Information Systems Management, Computer Science or related fields.
•Hands-on experience in implementing data integration processes; designing and developing data models (ER/Dimensional/Vault); and designing, developing and building detailed ETL/ELT processes or programs.
•Contributed to at least two phases of the SDLC, with experience in Big Data, data warehouse, data analytics, data migration, change management, and/or other Information Management (IM) related work.
•Experience with Hadoop technologies such as HDFS/MapR-FS, MapReduce (MRv2), advanced HDFS ACLs, Hive, HBase, Cassandra, Impala, Spark, Drill, Sentry, Sqoop, Flume, Kafka, Storm, ZooKeeper and the zkClient tool.
•Good understanding of the Cloudera, Hortonworks or MapR Hadoop distributions, with a deep understanding of administration concepts.
•Experience in working with RDBMS technologies such as Oracle, Microsoft SQL Server, PostgreSQL, DB2 and MySQL, and with MPP database technologies such as Teradata.
•Hands-on experience with Spark, Spark SQL, HiveQL, Drill QL, Impala and Spark DataFrames, and with Flink CEP and the Flink Table API & SQL, as ETL frameworks.
•Hands-on programming skills in Scala or Python using the Spark or Flink frameworks.
•Strong knowledge of Big Data stream ingestion and IoT streaming using Flume, Kafka, Storm, MQTT or RabbitMQ.
•Good understanding of Spark memory management, both with and without YARN.
•Should have a basic understanding of Cloudera Manager, Hortonworks Ambari and MapR Control System.
•Should have experience designing and developing against one or more NoSQL database technologies such as Cassandra, MongoDB, HBase, CouchDB/Couchbase or Elasticsearch.
•Should have good working knowledge of HCatalog and Hive metadata.
•Should have working knowledge of Kerberos authentication.
•Experience with commercial ETL tools such as Talend, Informatica or Alteryx will be an added advantage.
•Experience with MPP database technologies such as Teradata, Greenplum or IBM PureData will be an added advantage.
•Good knowledge of data warehouse and data management implementation methodology.
•Good knowledge of the Information Management framework, including operating model, data governance, data management, data security, data quality and data architecture.
•Knowledge and experience of data visualisation using tools such as SAS Visual Analytics or WRS, Tableau, Microsoft Power BI or Reporting Services, IBM Cognos, SAP BusinessObjects, etc. will be an advantage.
•Ability to pick up new tools quickly and work independently with minimal guidance from project leads/managers.
•Strong analytical and creative problem-solving capabilities.
•Ability to establish personal credibility quickly and demonstrate expertise.
•Ability to create a positive learning culture, coach and develop team members.
•4 to 10 years of experience in data warehouse, data analytics, change management, and/or other Information Management (IM) related work.
•Delivered at least two (2) full SDLC lifecycle projects.
•Industry or domain experience in at least one of Banking, Telecommunications or Consulting.
•Preferably with experience in implementation best practices involving data management, data reconciliation, data de-duplication, scheduling, etc.
•Able to assess design considerations relating to data management and integration.
•Experience with Agile/Scrum/Kanban software implementation methodologies.
•Should have good knowledge of DevOps engineering using Continuous Integration/Delivery tools such as Docker, Jenkins, Puppet, Chef, GitHub, Atlassian Jira, etc.
•Certification in any Hadoop/Big Data tool or technology, data integration, data management or visualisation tool is an added advantage.
•Knowledge of infrastructure fundamentals such as operating systems and networking is an added advantage.