
Generative AI Associate - Red Teaming Specialist
- Remote
- Calgary, Alberta, Canada
- Vancouver, British Columbia, Canada
- Winnipeg, Manitoba, Canada
- Moncton, New Brunswick, Canada
- St. John's, Newfoundland and Labrador, Canada
- Halifax, Nova Scotia, Canada
- Yellowknife, Northwest Territories, Canada
- Charlottetown, Prince Edward Island, Canada
- Saskatoon, Saskatchewan, Canada
- Whitehorse, Yukon, Canada
- Remote, Ontario, Canada
- CA$20 per hour
- Innodata Services LLC
Job description
Job Title: Generative AI Associate - Red Teaming (English)
Location: Fully Remote within Canada (excluding Quebec)
Employment Type: Full Time, 1-Month Contract Role (40 hours weekly, for 4 weeks)
Who we are:
Innodata (NASDAQ: INOD) is a leading data engineering company. With more than 2,000 customers and operations in 13 cities around the world, we are the AI technology solutions provider of choice for 4 out of 5 of the world’s biggest technology companies, as well as leading companies across financial services, insurance, technology, law, and medicine.
By combining advanced machine learning and artificial intelligence (ML/AI) technologies, a global workforce of subject matter experts, and a high-security infrastructure, we’re helping usher in the promise of AI. Innodata offers a powerful combination of both digital data solutions and easy-to-use, high-quality platforms.
Our global workforce includes over 5,000 employees in the United States, Canada, the United Kingdom, the Philippines, India, Sri Lanka, Israel, and Germany. We’re poised for a period of explosive growth over the next few years.
About the Role:
At Innodata, we’re working with the world’s largest technology companies on the next generation of generative AI and large language models (LLMs). We’re looking for smart, savvy, and curious Red Teaming Specialists to join our team.
This is the role that writers and hackers dream about: you’ll be challenging the next generation of LLMs to ensure their robustness and reliability. We’re testing generative AI’s ability to think critically and act safely, not just to generate content.
This isn’t just a job: it’s a once-in-a-lifetime opportunity to work on the frontlines of AI safety and security. There’s nothing more cutting-edge than this. Joining us means becoming an integral member of a global team dedicated to identifying vulnerabilities and improving the resilience of AI systems. You’ll be creatively crafting scenarios and prompts to test the limits of AI behavior, uncovering potential weaknesses and ensuring robust safeguards. You’ll be shaping the future of secure AI-powered platforms, pushing the boundaries of what’s possible. Keen to learn more?
What You’ll Be Doing:
As a Red Teaming Specialist on our AI Large Language Models (LLMs) team, you will be joining a truly global team of subject matter experts across a wide variety of disciplines and will be entrusted with a range of responsibilities. We’re seeking self-motivated, clever, and creative specialists who can handle the speed required to be on the frontlines of AI security. In return, we’ll be training you in cutting-edge methods of identifying and addressing vulnerabilities in generative AI. Below are some responsibilities and tasks of our Red Teaming Specialist role:
- Complete extensive training on AI/ML, LLMs, Red Teaming, and jailbreaking, as well as specific project guidelines and requirements
- Craft clever and sneaky prompts to attempt to bypass the filters and guardrails on LLMs, targeting specific vulnerabilities defined by our clients
- Collaborate closely with language specialists, team leads, and QA leads to produce the best possible work
- Assist our data scientists in conducting automated model attacks
- Adapt to the dynamic needs of different projects and clients, navigating shifting guidelines and requirements
- Keep up with the evolving capabilities and vulnerabilities of LLMs and help your team’s methods evolve with them
- Hit productivity targets, including the number of prompts written and the average handling time per prompt
Job requirements
Minimum Qualifications:
- A Bachelor’s degree, or an Associate’s degree with a minimum of 1 year of relevant industry experience. Advanced degrees (Master’s or PhD) are strongly preferred
- Professional or expert-level proficiency (C1/C2) in English
- Strong understanding of grammar, syntax, and semantics – knowing what “proper” English rules are, as well as when to violate them to better test AI responses
Please note: As a Red Teaming Specialist, you’ll push the boundaries of large language models and seek to expose their vulnerabilities. In this work, you may be dealing with material that is toxic or NSFW. Innodata is committed to the health of its workforce and so provides wellness resources and mental health support.
Hourly Rate: CA$20 per hour
Hourly rates at Innodata vary depending on a wide array of factors, which may include but are not limited to the role, skill set, educational background and geographic location.
We are an equal opportunity employer committed to fostering an inclusive, respectful, and diverse workplace. We welcome and encourage applications from individuals of all backgrounds and are dedicated to employment equity and building a team that reflects the diverse communities in which we live and operate.
In accordance with the Accessibility for Ontarians with Disabilities Act (AODA), we are committed to providing accommodations throughout the recruitment and selection process. If you require an accommodation, please let us know, and we will work with you to meet your needs.
Please be aware of recruitment scams involving individuals or organizations falsely claiming to represent employers. Innodata will never ask for payment, banking details, or sensitive personal information during the application process. To learn more on how to recognize job scams, please visit the Federal Trade Commission’s guide at https://consumer.ftc.gov/articles/job-scams.
If you believe you’ve been targeted by a recruitment scam, please report it to Innodata at verifyjoboffer@innodata.com and consider reporting it to the FTC at ReportFraud.ftc.gov.