
Dear Vulnerable: How may I contact you?

Published on Jun 21, 2024

I once heard the CISO of a large OEM refer to security researchers as “extortionists”. It was said in jest, but it revealed a common perception: many see security researchers (aka white- and gray-hat hackers) as opportunists who break things and then demand payment under the threat of public disclosure. The view is overly simplistic, but it persists.

Recently, at ESCAR USA, nearly every technical presenter discussed the challenges of notifying vendors about vulnerabilities. If you follow cybersecurity, you know these stories. They typically go: a security researcher scans devices connected to the internet, discovers a vulnerability, and attempts to disclose it responsibly, but is met with threats from the legal department, stonewalled, or simply ignored. Finally, the researcher publishes the findings; soon after, the vendor fixes the problem and launches a PR campaign to assure everyone that they’re actually very security-conscious.

The truth is, most security researchers simply want to reach the right person and see the issue addressed. While some seek recognition or funding (their valuable efforts certainly deserve some reward), the majority aim to make a meaningful, positive impact on society.

This article discusses how automotive suppliers and OEMs can establish effective communication with security researchers through a responsible disclosure program. It’s inexpensive, straightforward, and 100% necessary if you value your brand’s reputation.

What is a responsible disclosure program?

HackerOne defines it as a process where security researchers discover vulnerabilities and report them to the affected organization or vendor. The main goal, they say, is to improve security by addressing vulnerabilities before they can be exploited by malicious actors. That’s it. 

A good example of a supplier’s responsible disclosure program is Bosch’s vulnerability reporting page, which even has a hall of fame recognizing those who have found vulnerabilities! Another example is NVIDIA’s product security page, which, like Bosch’s, includes a PGP key for sending secure emails.

Why do we need a responsible disclosure program?

Even if your automotive cybersecurity engineering processes make ISO 21434 look like a kid’s bedtime story, today’s consumers demand more from their vehicles, and cybersecurity support can easily need to last 20 years or more (hopefully)! External researchers will find bugs, and a clear, accessible reporting mechanism reduces their frustration and ensures vulnerabilities are handled promptly.

The abstract of the Internet Engineering Task Force’s RFC 9116 makes another great point: when proper reporting channels are missing, vulnerabilities often go unreported.

But we pentest our automotive products already!

Thomas Liedtke from Kugler Maag says there is an invisible expiration date on cybersecurity. He uses cryptography as an example: once an algorithm is cracked (a fate inevitable for all cryptographic algorithms), everything that depends on it becomes vulnerable.

Pentesting is an important part of product security, but the best pentesters today will have better tools and capabilities tomorrow, and the most secure products today will inevitably have vulnerabilities in the future. Let’s also acknowledge that penetration testing is strictly limited in time due to cost, and sometimes limited in scope due to adjacent products and systems owned by third parties. Real-world attackers face no such boundaries.

We don’t have the budget to pay researchers.

A responsible disclosure program does not guarantee payment for reported vulnerabilities; that’s what bug bounty programs are for. Holding a discovered vulnerability for ransom is not protected by law (which does protect good-faith security researchers), nor would it help the researcher’s reputation.

That doesn’t mean you shouldn’t pay security researchers who find major vulnerabilities; it just means that payment isn’t expected, nor is it part of responsible disclosure. For a great comparison between bug bounty programs and responsible disclosure, check out this HackerOne article.

How do we get started?

What NOT to do

Here’s a great example of how not to do it. A security researcher found a vulnerability in a traffic controller system’s software that allowed remote access and control of traffic lights.

The affected company’s legal team sent a letter stating that their responsible disclosure program has very strict limitations (restrictions more typical of a bug bounty program). It even states, “we do not have the resources necessary to consider analysis of outdated items”, a comment I’m sure doesn’t appear in their sales literature.

The security researcher posted the redacted letter on LinkedIn. There are now hundreds of responses drawing attention to the situation, most pointing to the ignorance and audacity of the company’s reply.

The situation has gone from what could have been a simple “Thanks! How can we replicate this?” to what some lawyers might consider a company trying to cover up knowledge of a major vulnerability in critical infrastructure.

What to do

Avoid pitfalls like restricting which systems can be tested or how they are accessed, as these can deter researchers from reporting. Remember, this is just the establishment of a communication path between external security researchers and your company.

It’s best to follow a standardized approach like RFC 9116, which advocates the use of a security.txt file on your webserver. This file, located under the “/.well-known” directory, guides researchers on how to report vulnerabilities without needing to navigate through complex websites and is discoverable by many automated tools.
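
Here’s what a minimal security.txt might look like (every value below is a placeholder; swap in your own addresses and URLs):

```
Contact: mailto:security@example.com
Expires: 2026-06-30T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/responsible-disclosure
```

The Contact and Expires fields are required by RFC 9116; the rest are optional but make a researcher’s life much easier.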

The basic steps you’ll follow are:

Establish a security response team

  • Assemble a multidisciplinary team responsible for handling vulnerability disclosures. This team typically includes members from IT, legal, PR, and security.

Set up secure communication channels

  • Ensure that communication about vulnerabilities is secure and private. Key steps include creating a dedicated email address and configuring PGP encryption (see the sketch after this list).

Develop a vulnerability management policy

  • Create a clear policy that outlines how vulnerabilities should be reported, assessed, and remediated. This policy also includes protocols for public disclosure.

Implement tracking and management tools

  • Use software tools, such as those provided by VSEC, to organize, track, and manage vulnerability reports efficiently.

Develop communication templates

  • Prepare standardized responses for acknowledgments, updates, resolutions, and special scenarios to ensure consistency and professionalism in communications.

Foster a positive culture towards security research

  • Promote a culture that values and appreciates the contributions of security researchers, emphasizing respect and gratitude for their efforts, regardless of their findings. Also, train your staff in automotive cybersecurity. VSEC Learn has some pre-built options (some free).

Public relations and legal preparedness

  • Prepare for public communication and legal considerations associated with disclosing vulnerabilities.

Publish the program to your website

  • Once internal structures and policies are firmly in place, publish the program prominently on the company’s website and include it in the /.well-known/security.txt file to standardize access and enhance visibility.

Regular review and updates

  • Periodically review and update the policies, tools, and procedures to adapt to new security challenges and changes in technology.
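
To make the “secure communication channels” step above more concrete, here’s a rough sketch of generating a PGP key pair for a dedicated reporting address using the python-gnupg package. The email address, keyring path, and passphrase are placeholders, and it assumes GnuPG is installed locally; treat it as a starting point, not a turnkey implementation.

```python
# Sketch: create a PGP key pair for a dedicated vulnerability-reporting address.
# Assumes the python-gnupg package and a local GnuPG installation.
import gnupg

# Dedicated keyring directory for the security response team (must exist).
gpg = gnupg.GPG(gnupghome="/path/to/security-keyring")

# All parameters below are placeholders for illustration only.
key_input = gpg.gen_key_input(
    name_real="Product Security Team",
    name_email="security@example.com",
    key_type="RSA",
    key_length=4096,
    passphrase="change-me",
)
key = gpg.gen_key(key_input)

# ASCII-armored public key, suitable for publishing on your disclosure page
# and referencing from the Encryption field of security.txt.
public_key = gpg.export_keys(key.fingerprint)
print(public_key)
```

Publish the exported public key alongside your reporting instructions so researchers can encrypt their reports to you from day one.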

For additional examples, simply have a look at what other companies are doing! The easiest way to check is to go to their main site and append “/.well-known/security.txt” to the end of the URL.
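
If you’d rather script the check than paste URLs into a browser, a few lines of Python will do it (the domain below is just a placeholder):

```python
# Quick check for a published security.txt; swap in the domain you want to inspect.
import urllib.request

url = "https://example.com/.well-known/security.txt"
try:
    with urllib.request.urlopen(url, timeout=10) as response:
        print(response.read().decode("utf-8"))
except Exception as exc:
    print(f"No security.txt found at {url}: {exc}")
```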

In Conclusion

If your automotive products are successful, they’ll eventually be tested by independent researchers. It’s inevitable. Establish a basic process and strategy to handle communications with the people honest enough to notify you instead of selling their findings on a zero-day market. If you don’t have a plan to communicate with them, the news reporters will certainly find a way to contact you.

Most researchers out there simply want to share what they found and make your products safer! They would appreciate a thank you. They would enjoy some credit. And they would love some cash if it’s a major vulnerability.
