Shape the future of AI/ML hardware at Google by driving TPU technology and collaborating with top engineers.
Your role
Key responsibilities include:
- Work on ML workload characterization and benchmarking.
- Propose capabilities and optimizations for next-generation TPUs.
- Develop architectural and microarchitectural power/performance models.
- Perform quantitative and qualitative performance and power analysis.
- Collaborate on power, performance, area (PPA) trade-off analysis.
- Partner with hardware design, software, compiler, ML model, and research teams on hardware/software co-design.
- Develop architecture specifications for the AI/ML roadmap.
About you
The ideal candidate will have:
- A PhD in Electronics and Communication Engineering, Electrical Engineering, Computer Engineering, or a related technical field, or equivalent practical experience.
- Experience with accelerator architectures and data center workloads.
- Proficiency in programming languages such as C++ and Python.
- Experience with performance modeling tools.
- Knowledge of arithmetic units, bus architectures, accelerators, or memory hierarchies.
- Understanding of high-performance and low-power design techniques.
Compensation & benefits
Competitive salary with bonuses, comprehensive healthcare, and various perks.
Training & development
Opportunities for professional development and mentorship programs to enhance skills and career growth.
Career progression
Potential for advancement in AI/ML hardware engineering roles, with expected growth in leadership and technical expertise.
How to apply
Submit your application by completing the required form and attaching your resume and cover letter. Ensure all documents are up to date and reflect your relevant experience.