[Job Posting] Internal referrals for Pivotal China R&D Center: GPDB, HAWQ, PM, CE

Posted on 2016-08-24 17:41:40
Forwarding on behalf of a friend: resumes are being collected and internal referrals are available. Please send your resume to 992647009@qq.com, and feel free to pass this along to anyone who might be interested.
Candidates who have already graduated and have work experience are preferred, but fresh graduates are also welcome.

Pivotal is a new company jointly formed by EMC and VMware. Pivotal aims to provide a native foundation for a new generation of applications, built on the continuously evolving IT practices of leading cloud and internet companies, and its mission is to bring these innovations to enterprise IT architects and independent software vendors. Pivotal was founded on April 26, 2013 and began operating as an independent entity, at the same time announcing Pivotal One, its next-generation PaaS plan; Pivotal One will be the first platform to integrate a new data fabric, modern programming frameworks, cloud portability, and legacy system support. At its founding, General Electric invested USD 105 million, giving GE a 10% stake in the Pivotal Initiative.



Email subject line: Name + Position + from Pivotal Wechat

We look forward to having you join us!



Software Engineer - GPDB
You

You have a passion for developing large distributed systems to manage data. You may have worked on Massively Parallel Processing (MPP) solutions, SQL on Hadoop, in-memory grid systems, or perhaps even NoSQL systems. Whichever kinds of 'Big Data' solutions you've worked on, you understand that software that manages data at scale is a little different from other kinds of systems. Solutions that work fine on single nodes with gigabytes of data can fall over when scaled to petabytes of data and thousands of nodes. You relish discovering solutions that perform at scale while preserving the sanctity of customer data. You are curious, love to learn, and love to share your latest discoveries. Above all, you love shipping software as a member of a collaborative team.

Us

At Pivotal, our mission is to enable customers to build a new class of applications, leveraging big and fast data, and do all of this with the power of cloud-independence. We are at the epicenter of Cloud, Big Data and Mobile and we’re actively investing in all three areas. Pivotal’s broad portfolio of products includes the Big Data Suite, the most complete approach to enterprise data lakes and advanced analytics; Pivotal Cloud Foundry, the industry leading Platform as a Service product; and world leading ultra-agile application development through Pivotal Labs where we’re transforming how the world builds software.

The Big Data Suite includes Greenplum Database (GPDB), our massively parallel data warehouse;  HAWQ, our SQL on Hadoop solution; GemFire, our distributed in-memory key-value store; and MADlib, our machine learning solution. Open source is an important part of our strategy and all of our products are open source.

The Pivotal Data engineering team tackles the technical challenges that come with massively parallel distributed systems operating on petabytes of data across thousands of nodes. We delve into areas like query optimization, high-performance in-memory transaction and query processing, parallel and distributed execution of advanced data processing algorithms, resource management, and storage. Here at Pivotal you'll be working on hard, worthwhile problems with a collaborative team, accelerating your growth as an engineer.
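To make the kind of work concrete, here is a minimal, hypothetical sketch in Greenplum-style SQL (the table and column names are invented for illustration). The DISTRIBUTED BY clause is how GPDB spreads rows across segment nodes, and it is exactly what the planner reasons about when deciding whether a join can run segment-locally or must move data.

    -- Hypothetical schema: rows are hash-distributed across segments by the
    -- column named in DISTRIBUTED BY.
    CREATE TABLE orders (
        order_id    bigint,
        customer_id bigint,
        amount      numeric
    ) DISTRIBUTED BY (customer_id);

    CREATE TABLE customers (
        customer_id bigint,
        region      text
    ) DISTRIBUTED BY (customer_id);

    -- Both tables share a distribution key, so this join can execute on each
    -- segment without moving rows; joining on a non-distribution column would
    -- force the planner to insert a redistribute or broadcast motion.
    EXPLAIN
    SELECT c.region, sum(o.amount) AS revenue
    FROM   orders o
    JOIN   customers c ON o.customer_id = c.customer_id
    GROUP  BY c.region;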

Desired Skills/ Experience:

Deep, meaningful, recent development experience with one or more of the following is required:

Database kernel or related internals
In-memory object stores
SQL on Hadoop
Machine learning
Query execution or cost based query optimization
Highly scalable, distributed, data driven applications
Linux systems code development in C/C++ and/or Go/Python
Products deployed in highly available, mission critical, enterprise environments


Optional, but highly desirable skill sets:

A deep understanding of Postgres internals from versions 8.2 to 9.x
Domain knowledge of high speed transaction processing, or resource management, or high scale scheduling
Development experience with in-memory grid systems
Extremely large scale application development – data systems in the multi-petabyte range
Exposure to pair-programming, test-driven development, continuous integration and other agile or XP engineering practices
Experience with the challenges of testing distributed systems.


Software Engineer - HAWQ
You

You have a passion for large distributed systems to manage data on a massive scale.
You love building highly concurrent systems that are fault tolerant and extremely reliable.
You follow current trends in topics such as stream processing and in-memory computing.
You’d really like to believe there’s a way to defy the CAP theorem and get consistency, availability, and partition tolerance all in the same system (even if no one else has managed to do that yet). Above all, you love shipping software as a member of a collaborative team.

Us

At Pivotal, our mission is to enable customers to build a new class of applications, leveraging big and fast data, and do all of this with the power of cloud-independence. Pivotal’s offering includes the Big Data Suite, the most complete approach to enterprise data lakes and advanced analytics; Pivotal Cloud Foundry, the industry leading Platform as a Service product; and world leading ultra-agile application development through Pivotal Labs. Open source is an important part of our strategy. Many of our products are already open source; those that are not will be soon.

The Big Data Suite includes HAWQ, our SQL on Hadoop solution; Greenplum Database (GPDB), our massively parallel data warehouse; GemFire, our distributed in-memory key-value store; and MADlib, our machine learning solution.

The Apache HAWQ engineering team at Pivotal tackles challenges that come with massively parallel distributed systems operating at extreme scale. We delve into areas like query optimization, parallel query execution, scalable distributed data structures and fault-tolerance paradigms.  Here at Pivotal you'll be working on hard problems with a collaborative team, accelerating your growth as an engineer.
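As a rough illustration of the "SQL on Hadoop" part of this role, the sketch below defines a PXF external table over data that already lives in HDFS and queries it with ordinary SQL. The host, port, path, and column layout are assumptions invented for the example; the exact LOCATION URI and profile depend on how PXF is configured in a given cluster.

    -- Hypothetical external table over comma-separated files stored in HDFS.
    -- The pxf:// LOCATION and the HdfsTextSimple profile come from HAWQ's PXF
    -- framework; the namenode host, port, and path below are placeholders.
    CREATE EXTERNAL TABLE clickstream_ext (
        user_id    bigint,
        url        text,
        event_time timestamp
    )
    LOCATION ('pxf://namenode:51200/data/clickstream/2016-08?PROFILE=HdfsTextSimple')
    FORMAT 'TEXT' (DELIMITER ',');

    -- Once defined, the HDFS data can be queried and joined against native
    -- HAWQ tables; the scan is parallelized across segments.
    SELECT url, count(*) AS hits
    FROM   clickstream_ext
    GROUP  BY url
    ORDER  BY hits DESC
    LIMIT  10;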

Desired Skills and Experience:

BS/MS/PhD students in Computer Science or equivalent, with coursework or experience in distributed systems.
Strong C/C++, particularly in concurrent programming techniques.
Keen understanding of state-of-the-art techniques and trends in data management, high-scale network applications, and distributed algorithms.
Excellent communication and collaboration skills.


Product Manager
You

You have a vision for Big and Fast Data that goes beyond the buzzwords. You understand the potential of connecting real-time data management systems, microservices, and applications. You approach data problems from a human-centric understanding, knowing that behind every data problem there is a person and a use case.

While you see the grand vision of high-performance databases at scale, you are able to lay out the individual stepping stones to get there. You are constantly linking customer pain points to individual user stories that convey the what, not the how.

You take an analytical and iterative approach to building software. You are constantly moving the product forward while remaining skeptical of your own decisions, building in feedback loops to validate your thinking. You are not a 10,000-foot product manager. You relish the opportunity to build software in close collaboration with engineers and designers.

Us

At Pivotal, our mission is to enable customers to build a new class of applications, leveraging big and fast data, and do all of this with the power of cloud independence. We are at the epicenter of Cloud, Big Data and Mobile and we’re actively investing in all three areas.

We practice what we preach:

We practice Extreme Programming, test-driven development, and the Balanced Team approach.
We are passionate about open source. Our products are based on 100% open-source software.


Responsibilities:

Develop a product vision by engaging with internal stakeholders, customers, and the broader PaaS community, then break that vision down into an actionable backlog of user stories for the development team
Work hands-on with the development team to prioritize, plan, and deliver software that meets your requirements
Focus on delivering a Minimum Viable Product through careful and deliberate prioritization
Help innovate and iterate on agile PM processes and share our learnings
Actively engage customers and the community of users


Desired Skills and Experience:

3+ years of product management or relevant experience, especially with distributed systems. We welcome engineers who are looking to make a switch to product management.
Skilled at defining and prioritizing product features
Ability to effectively gain buy-in and teach new skills, practices, approaches, and values
Strong leadership and communication skills and the ability to teach others
Ability to work collaboratively with others and navigate complex decision making
Ability to collaborate well with engineers, designers, and clients
Previous success working with an agile development team
Previous success working on open-source products
Strong technical skills, with a background in software development or database operations
Knowledge of Pivotal Tracker is a distinct advantage
Willingness to travel (20% or less) when needed


Customer Engineer
Expectations:

Troubleshoot database, hardware, OS, and networking issues for customers and internal departments (Professional Services, Systems Engineering, Platform Engineering)
Apply advanced and in-depth knowledge to analyze, diagnose, replicate, troubleshoot, and resolve standard to highly complex technical issues reported by customers against our big data product (Greenplum Database); see the sketch after this list
Contribute to our support and customer success by taking initiatives to improve process, teamwork, and/or any other area that would improve overall team productivity.
Handle stressful situations effectively and escalate as appropriate.
Assist with any other initiatives as needed (testing new product features, proactive support).
Take ownership of, manage, and maintain up-to-date status on all support requests
Assist and mentor junior staff on the team to resolve complex issues
Escalate unresolved issues that require more in-depth knowledge to engineering in a timely manner
Report and submit product defects to our engineering team using the appropriate channel or tool
Create and peer review new knowledgebase articles
Provide after business hour support on a rotation basis
Share all acquired knowledge within and across teams
Actively contribute to the community surrounding the Pivotal Data Suite products via mailing lists, Forums and Knowledge base
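For troubleshooting duties like the ones above, a first diagnostic pass in Greenplum SQL might look like the sketch below. The catalog views are standard (gp_segment_configuration, and the pre-9.2 form of pg_stat_activity used by the Postgres-8.2-based Greenplum releases of this era); the filters and the 30-minute threshold are purely illustrative assumptions.

    -- Check whether any segment is marked down or is not in its synced state.
    SELECT dbid, content, role, preferred_role, mode, status, hostname, port
    FROM   gp_segment_configuration
    WHERE  status <> 'u' OR mode <> 's'
    ORDER  BY content;

    -- Look for long-running queries (procpid/current_query are the column
    -- names in the Postgres-8.2-based Greenplum catalogs of this period).
    SELECT procpid, usename, waiting, query_start, current_query
    FROM   pg_stat_activity
    WHERE  now() - query_start > interval '30 minutes'
    ORDER  BY query_start;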


Qualifications:

Bachelor’s degree in Computer Science or related field of studies
Strong analytical, troubleshooting, and problem solving skills
3+ years of industry experience in software development, developer support, or a similar discipline
Must have experience supporting RDBMS (Oracle/SQL Server) and/or systems administration (Solaris/Unix/Linux), and be willing to learn areas outside of your current comfort zone.
Good knowledge of SQL.
Experience with end-user Reporting/ETL tools (BO, Cognos, Informatica, etc) and large-scale data warehousing experience is desirable.
Scripting skills in Bash, Perl, Python or MapReduce - a plus.
Experience with MPP databases (Teradata, Netezza/Greenplum) or PostgreSQL is a plus.
Must have prior experience in customer-facing consulting or support / call center skills.
Must have exceptional customer service and customer advocacy skills
Must have excellent verbal and written communication skills
Must be able to follow a process flow and handle calls according to procedure.
Comfortable in administering Linux based environments
Great English communication skills
Comfortable facing customers and handling stress
Enthusiasm and a great attitude that fit a startup culture