```ts
async function handler(request) {
  const identifier = request.getUserId() // or ip or anything else you want

  const ratelimit = await unkey.limit(identifier)
  if (!ratelimit.success) {
    return new Response("try again later", { status: 429 })
  }

  // handle the request here
}
```
Everything we do is built for scale and stability.
We built on some of the world’s most stable platforms (Planetscale and Cloudflare) and run an extensive test suite before and after every deployment.
Even so, we would be fools if we didn’t explain how you can put safeguards in place along the way.
In case of severe network degradation or other unforeseen events, you might want to put an upper bound on how long you are willing to wait for a response from Unkey.
By default the SDK will reject a request if it hasn’t received a response from Unkey within 5 seconds. You can tune this via the timeout config in the constructor (see below).
The SDK captures most errors and handles them on its own, but we also encourage you to add an onError handler to configure what happens when something goes wrong.
Both the fallback property of the timeout config and the onError config are callback functions. They receive the original request identifier as one of their parameters, which you can use to decide whether to reject the request.
```ts
import { Ratelimit } from "@unkey/ratelimit"

// In this example we decide to let requests pass in case something goes wrong.
// But you can of course also reject them if you want.
const fallback = (identifier: string) => ({
  success: true,
  limit: 0,
  reset: 0,
  remaining: 0,
})

const unkey = new Ratelimit({
  // ... standard stuff
  timeout: {
    ms: 3000, // only wait 3s at most before returning the fallback
    fallback,
  },
  onError: (err, identifier) => {
    console.error(`${identifier} - ${err.message}`)
    return fallback(identifier)
  },
})

const { success } = await unkey.limit(identifier)
```
Configure a timeout to prevent network issues from blocking your function for too long.
Disable it by setting timeout: false
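For example, a minimal sketch of opting out of the timeout, assuming the usual rootKey, namespace, limit, and duration fields that the examples above omit as "standard stuff":

```ts
import { Ratelimit } from "@unkey/ratelimit"

// Illustrative configuration values; replace with your own.
const unkey = new Ratelimit({
  rootKey: process.env.UNKEY_ROOT_KEY!,
  namespace: "my-app",
  limit: 10,
  duration: "1s",
  // Opt out of the client-side timeout entirely.
  timeout: false,
})
```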
Timeouts rely on Date.now(). In Cloudflare Workers, time doesn’t progress unless some I/O is happening, which means the timeout might not work as expected.
Other runtimes work as expected.
Expensive requests may use up more resources. You can specify a cost for the request and
we’ll deduct that many tokens from the current window. If there are not enough tokens left,
the request is denied.
Example:
You have a limit of 10 requests per second and have already used 4 of them in the current
window.
Now a new request comes in with a higher cost:
```ts
const res = await rl.limit("identifier", { cost: 4 })
```
The request passes, and the current usage is now at 8.
The same request happens again, but this time it is rejected, because it would exceed the
limit in the current window: 8 + 4 > 10
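Put together, a minimal sketch of this scenario, again assuming the standard configuration fields (rootKey, namespace, limit, duration) that the earlier examples omit:

```ts
import { Ratelimit } from "@unkey/ratelimit"

// Assumed configuration: 10 tokens per 1-second window.
const rl = new Ratelimit({
  rootKey: process.env.UNKEY_ROOT_KEY!,
  namespace: "my-app",
  limit: 10,
  duration: "1s",
})

// Each call deducts 4 tokens from the current window.
const a = await rl.limit("identifier", { cost: 4 }) // usage 4/10  -> success: true
const b = await rl.limit("identifier", { cost: 4 }) // usage 8/10  -> success: true
const c = await rl.limit("identifier", { cost: 4 }) // 8 + 4 > 10  -> success: false
```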