Showing posts with label api. Show all posts

Tuesday, December 26, 2023

OWASP DependencyCheck returns 403 Forbidden accessing NVD API using API key

Introduction

Recently the NVD (National Vulnerability Database), from which the OWASP Dependency-Check plugin retrieves its data to check for vulnerabilities, has introduced the use of an API key. That's for them to better control access and throttling - imagine how many companies and organizations hit that API each time a dependency check build is performed. Especially those that don't cache the NVD database and retrieve it again at each run. And be aware: "... previous versions of dependency-check utilize the NVD data feeds which will be deprecated on Dec 15th, 2023. Versions earlier then 9.0.0 are no longer supported and could fail to work after Dec 15th, 2023."

But this introduction hasn't gone without some hiccups. For example, it is still possible to get HTTP 403 Forbidden responses even though you have a valid key. Here's my research while trying to fix it.

Setup:

  • Gradle 7.x
  • Dependency Check v9.0.6 (issue applies at least for versions > 8.4.3)
  • Configuration:

    dependencyCheck {
        failBuildOnCVSS = 6
        failOnError = true
        suppressionFile = '/bamboo/owasp/suppressions.xml'
        nvd.apiKey = '<yourkey>'
    }

    You can also set it dynamically via a system property (passed on the command line) like this:

    dependencyCheck {
      nvd {
        apiKey = System.getProperty("ENV_NVD_API_KEY")
      }
    }

  • Via commandline you can invoke it like this:

    ./gradlew dependencyCheckAggregate -DENV_NVD_API_KEY=<yourkey>

 

Solution

First you should check if your API key is valid by executing this command:

curl -H "Accept: application/json" -H "apiKey: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -v https://services.nvd.nist.gov/rest/json/cves/2.0\?cpeName\=cpe:2.3:o:microsoft:windows_10:1607:\*:\*:\*:\*:\*:\*:\*
 

(where xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx is your NVD API key)

That should return JSON (and not an error such as a 403 or 404). Now you know your API key is valid.
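For use in CI it can help to turn that curl check into a small script that fails fast with a readable message. Below is a sketch; the helper name and messages are my own, and the commented-out curl call is essentially the same request as above:

```shell
# Hypothetical helper: translate the HTTP status of the NVD key check
# into a human-readable verdict for a CI log.
classify_nvd_status() {
  case "$1" in
    200) echo "key accepted" ;;
    403) echo "forbidden - invalid key or rate limited" ;;
    404) echo "not found - check the request URL" ;;
    503) echo "NVD service unavailable, retry later" ;;
    *)   echo "unexpected HTTP status $1" ;;
  esac
}

# Usage (network call, run manually; NVD_API_KEY assumed to be set):
# status=$(curl -s -o /dev/null -w '%{http_code}' \
#   -H 'Accept: application/json' -H "apiKey: $NVD_API_KEY" \
#   'https://services.nvd.nist.gov/rest/json/cves/2.0?resultsPerPage=1')
# classify_nvd_status "$status"
```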
 

Some have had some success with setting the delay longer:

    nvd {
        apiKey = System.getProperty("ENV_NVD_API_KEY")
        delay = 6000 // milliseconds, default is 2000 with API key, 8000 without
    }

Commandline version:

--nvdDelay 6000
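The delay option tells dependency-check itself to wait between NVD requests. To illustrate the same idea at the build-script level, here is a generic retry-with-backoff wrapper (the function name and the doubling delays are my own choice, not part of the tool) that can re-run a failed scan with increasing waits:

```shell
# Illustrative sketch: retry a command with exponentially growing waits,
# mirroring what delay/backoff settings do inside dependency-check.
retry_with_backoff() {
  local max_attempts=$1; shift
  local delay=$1; shift          # initial wait in seconds
  local attempt=1
  while true; do
    "$@" && return 0             # success: stop retrying
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed, sleeping ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))         # double the wait each time
    attempt=$((attempt + 1))
  done
}

# Example: retry the scan up to 3 times, starting with a 6 second wait:
# retry_with_backoff 3 6 ./gradlew dependencyCheckAggregate -DENV_NVD_API_KEY="$NVD_API_KEY"
```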
 

You can also increase the validForHours option, but that doesn't help if you construct completely new Docker containers for each build - you lose that history.

All NVD options you can pass to DependencyCheck can be found here.

But currently (27 December 2023) all the above efforts don't always fix the 403 problem. Sometimes it works for a while, but then stops again. If many projects in your company build at about the same time, you still have a chance of getting throttled, of course.

The best solution is to create a local cache so you are less dependent on NVD API calls (and thus the throttling).
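As a sketch of that caching idea (all paths and helper names here are hypothetical): keep the dependency-check data directory on a volume that survives container rebuilds, and only treat it as needing a refresh when it has grown stale, similar to what the validForHours option decides internally:

```shell
# Sketch, with hypothetical paths and names: reuse a persistent
# dependency-check data directory so each build does not re-download
# the whole NVD data set. cache_is_fresh is my own helper, not part
# of the tool.
cache_is_fresh() {
  # $1 = marker file touched after the last NVD update, $2 = max age in hours
  marker=$1; max_hours=$2
  [ -f "$marker" ] || return 1
  mtime=$(stat -c %Y "$marker" 2>/dev/null || stat -f %m "$marker")
  age=$(( $(date +%s) - mtime ))
  [ "$age" -lt $(( max_hours * 3600 )) ]
}

# In a real pipeline this would be something like
# /bamboo/owasp/dependency-check-data on a persistent mount:
CACHE_DIR=${CACHE_DIR:-./dependency-check-data}
mkdir -p "$CACHE_DIR"
if cache_is_fresh "$CACHE_DIR/.last-nvd-update" 4; then
  echo "NVD cache is fresh, no API download needed"
else
  echo "NVD cache is stale or missing, the scanner will refresh it"
  touch "$CACHE_DIR/.last-nvd-update"
fi
```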

 

Other causes mentioned

  • Being behind a proxy with your whole company; see https://github.com/jeremylong/DependencyCheck/issues/6127
  • Multiple builds running at the same time with the same API key: "If you have multiple builds happening at the same time - using the same API key you could hit the NVD rate limiting threshold. Ideally, in an environment with multiple builds you would implement some sort of caching strategy". See: https://github.com/jeremylong/DependencyCheck/issues/6195
  • Use the --data argument to control the cache location.
  • It appears the NVD has put a temporary block on all requests that use a virtualMatchString of "cpe:2.3:a" - outstanding issue.



Saturday, July 27, 2019

PACT Consumer Driven Contract Testing: how to allow any body in the response


Consumer Driven Contract testing is a way to ensure that services (such as an API provider and a client) can communicate with each other. Without contract testing, the only way to know that services can communicate is by using expensive and brittle integration tests.
PACT is a contract testing tool.  


For response matching you want to be as loose as possible with the matching for the response (will_respond_with(...)) though. This stops the tests being brittle on the provider side. Most of the time you don't care about the exact values of the (JSON or XML) response, but you do care about the types of the values, e.g. a string or a number. In that case you'll be using 'type matching'.

Sadly, for the JVM matchers there's no such handy method that with one invocation makes sure only types are validated, as there is for example for Ruby/Groovy/Javascript/Node: Pact::SomethingLike.
Not much documentation on the Java/JVM matchers can be found on the official Pact site itself; most of the examples are for Ruby/Groovy/Javascript/Node.

For the Java Virtual Machine (JVM) integration you'd use pact-jvm, with matchers like .stringType() for the body, using the PACT DSL or this lambda extension.

But it was hard to find how to specify that either an empty body or any data in the body of the response is allowed, when the consumer does not require a body at all (in other words: it is fine with an empty body or with any fields in it).
Solution: for that you need to completely omit the .body() in the consumer contract definition.
If you specify a .body(new PactDslJsonBody()), the generated contract will contain the matcher "body" : {}, therefore requiring an empty body. And if the provider (test) then generates one or more fields in the response, you'll see this as the message in the failing test:


Expected an empty Map but received Map(..... fields added by provider ...)


So a full example which accepts an empty body or a body with elements in it:

.consumer("Some Consumer")
.hasPactWith("Some Provider")
.given("a certain state on the provider")
    .uponReceiving("a request for something")
        .path("/hello")
        .method("POST")
    .willRespondWith()
        .status(200) // note: no .body() here on purpose, so any (or no) response body matches
.toPact()


Sunday, April 11, 2010

Best of this Week Summary 5 April - 11 April 2010

Sunday, December 13, 2009

Best of this Week Summary 07 December - 13 December 2009

Sunday, October 25, 2009

Best of this Week Summary 19 October - 25 October 2009

Sunday, September 6, 2009

Best of this Week Summary 24 August - 06 September 2009

  • Great introduction to Solr, a search server (and more). The article describes how to get it running, send it some documents to index and how to search those documents in a controlled way.

  • More details on Google Wave: the draft specification for the Google Wave Federation Protocol and the Java source code for the Google Wave Federation Prototype Server

  • An extensive comparison of Spring and Seam. And, recently added chapter 5, comparing them with Wicket 1.3.6. Too long for you to read? Then at least read the conclusion :)
    Talking about Wicket, here are some
    experiences written down on migrating an existing Wicket application to the new 1.4 version.

  • This article provides a short overview on the basics of RESTful HTTP and discusses typical issues that developers face when they design RESTful HTTP applications. It shows how to apply the REST architecture style in practice. It describes commonly used approaches to name URIs, discusses how to interact with resources through the Uniform interface, when to use PUT or POST and how to support non-CRUD operations.
    Related to that, here's 8 great tips/lessons learned for creating an API.

Sunday, December 28, 2008

Best of this Week Summary 16 December - 28 December 2008

  • Great insight on Second Life's architecture. For example: "A physical server (1 CPU) is responsible for about 16 acres of land and it is connected to neighboring ones which are each responsible for another 16 acres. The server is responsible for the objects existing in its area, the scripts running, the users logged in and standing in its area". Presentation is one hour in total.

  • There's more to REST than meets the eye. And many REST APIs are not really as RESTful as Roy Fielding defines it. Media type design is an important item that was not in his original REST dissertation. Some interpretation of what Roy actually means can be found here.

  • JanRain (known for their OpenID libraries) have created a nice widget named RPX that allows you to integrate authentication within your existing site in an easy and user-intuitive way. I really like the clear, easy and non-intrusive way the possibilities are shown. For real novice users the redirecting to and from the authenticating sites might still be a little bit confusing though. Supported protocols are: OpenID 1.x/2.0, Facebook Connect, MySpaceID and Google. Below is a screenshot of what the registration part looks like:

    A couple of example sites where this is already implemented can be found here. And some more on the possibilities here.
    Note that from the technical overview you can see that the RPX server sits in between. That is the only disadvantage of this solution: that you are dependent on an intermediate server.

  • The W3 Consortium has released a webpage mobile-friendliness checker. The tests it performs can be found in the mobileOK Basic Tests 1.0 specification. Other validators you might know from them are the feed validator, XML Schema validator, CSS validator and Markup validator. Running these very successful services without any advertising costs a lot of money. Therefore you can now donate here for support. If you compute how much time those validators have saved you, donating any small percentage of that will already help W3C keep these validators running.

Saturday, November 15, 2008

Best of this Week Summary 04 November - 16 November 2008

Saturday, October 4, 2008

Best of this Week Summary 29 September - 05 October 2008

Saturday, September 27, 2008

Best of this Week Summary 22 September - 28 September 2008

Sunday, August 3, 2008

The iPhone Push Notification Service: what to watch out for

This week Apple introduced its Push Notification Service API to a restricted set of developers.
What's so special about this? The new iPhone 3G does not allow developers to create applications that can run in the background. An example would be a chat application for Facebook that sounds a 'ping' when one of your contacts goes online; this chat application would need to be running all the time on your iPhone, now and then asking the server "anybody new?". Or it would just have to sit listening for a message from the server telling it that one of your contacts logged on.
This push notification service sits between the 3rd party server (e.g. the Facebook application server) and the iPhone device. See here for a clear basic architecture picture. It allows any 3rd party server to contact this Apple(!) service, which in turn contacts the related iPhone device. This means Apple will have all the knowledge of all the 3rd party applications and their communication with the connected iPhones!! Scary. This sounds sooo Microsoft.
Here's another post dedicated to this too.

Saturday, April 12, 2008

Best of this Week Summary 7 April - 12 April 2008

Sunday, February 10, 2008

Best of this Week Summary 28 January - 10 February 2008

  • Interesting idea mentioned in this post: the very basic site inursite.com validates your markup daily and sends you the result via email or RSS. Of course your site should already validate while you are building it, but this site can help with Continuous Integration of your front-end.

  • Great overview of Javascript/AJAX performance issues in all major browsers (except Opera). You can use this information to know where to focus your Javascript optimizations on.

  • This week several BIG names joined OpenID: Google, Verisign and IBM.

  • Google just released its Social Graph API. It tries to find public relationships between people's accounts.

Sunday, November 11, 2007

Best of this Week Summary 6 Nov - 11 Nov 2007

  • This is a good blog to get you started on JavaFX and related technologies. See this post on what's been covered until now.

  • Madly interesting is this technical posting about Amazon's Dynamo, their internal distributed storage system in which data is stored and looked up via a key, with a put() and get() interface. Sounds quite similar to the put() and get() of a Hashtable in Java, right? ;-) Actually Dynamo is built in Java, so I guess that's no coincidence! The posting gives you quite some details on Amazon's internal infrastructure, and introduces interesting new terms, like calling it an "eventually consistent storage system". What is also cool is that each of Amazon's internal applications can set up its own SLA with Dynamo. This SLA defines the amount of delay and data discrepancy the application will tolerate from Dynamo. The fact that Amazon is opening up its services (with S3 and EC2) makes it a huge differentiator from companies like Google and Microsoft, which don't open up their systems (some Google GFS info you can find here). Related to this is the new term being coined recently: HaaS (Hardware as a Service). No time to read the whole paper? A summary can be found here. Compare it with Hadoop and CouchDB.

  • Related to my last week's post about OpenSocial, this week the (very) alpha version 0.5 of the Container API has been released.

  • Note: I turned on moderation for comments this week because of a big spamming "effort"... Thanks whoever you are...

Monday, November 5, 2007

OpenSocial: the harder technical outstanding questions

This week anybody following technical news cannot have missed the announcement of the Google OpenSocial initiative.



Here is a short introduction. A bunch of live examples can be found here.

Basically it is a widgets (gadgets) API specification, built with Javascript and XML, that anybody can plug in on a social network page (like MySpace or Plaxo) to show relevant social information (e.g. who my friends in this social network are).
Most posts were positive and only looked at potential positive uses. It took about half a day before the first more critical posts showed up.
With this post I'd like to provide you the current outstanding technical questions and issues with the OpenSocial API. First I'll list the posts I found until now, then my own outstanding questions.

Note: don't misunderstand me, I like the initiative, trying to get the so-called social graph standardized. But a careful examination is definitely relevant if you want to introduce it on your site.

  • Good points (not only technical) what is not yet so good about OpenSocial

  • This is a pretty good technical overview with points of critique, which the above post mentions. Btw: I definitely disagree with this statement: "if REST APIs are so simple why do developers feel the need to hide them behind object models?". Object model abstraction can still be needed/desired to achieve the best level of abstraction.

  • Note that the container API itself is not there yet...

  • In the FAQ there is the question: "Can OpenSocial apps interact with other websites?". The answer is "Yes, social apps have the ability to fully interact with outside 3rd party applications using standard web protocols." But how does it then get around the same-origin policy for Javascript? Would the calls go via Google? How does it work with Google Gadgets (iGoogle), where you can specify any feed URL? My guess for this last one: via a Google proxy, as provided in the Google Feed API... At the moment no OpenSocial gadget exists that implements accessing data from multiple social networks.

  • Why are technologies like Microformats and APML not used? My guess for now: those "standards" are still in too early a phase. An API based on Javascript and XML uses standardized technologies and is immediately available.

  • Really pay attention to the authentication mechanism described for the People Data, which all goes through a Google account (or you can always use an email address and password). Check the first details on this issue here.


Update: here's some sort of answer to some of the above questions.

Update: Google released version 0.5 of the OpenSocial Service Provider API (the container API). Very alpha.

Update: Here's a Javascript library that "solves" the same-origin-policy.