Rate Limits and Call Limitations
Understanding the rate limits and call limitations of SpeedyNodes is crucial for optimizing your application's performance and ensuring uninterrupted service.
Request Rate Limits
Each SpeedyNodes plan comes with specific request rate limits:
| Plan | Fullnode RPS | Archive Node RPS |
| --- | --- | --- |
| Free Trial | 500 | N/A |
| Tier 1 | 500 | 250 |
| Tier 2 | 500 | 250 |
| Tier 3 | 500 | 250 |
| Private Node | Unlimited | Unlimited |
RPS = Requests Per Second
What happens when you exceed rate limits?
When you exceed your plan's rate limits:
- Temporary Queueing: Initially, excess requests may be queued briefly
- Rejection: If server load is high, some requests might be rejected with a 429 (Too Many Requests) response
- Potential Service Degradation: Consistently exceeding rate limits can lead to temporary IP throttling
Handling Heavy Calls
Some RPC methods are more resource-intensive than others. These "heavy calls" can consume significant bandwidth and processing power:
- eth_getBlockReceipts
- eth_getLogs (with wide block ranges)
- debug_traceTransaction
- trace_transaction
- debug_traceBlockByNumber
- trace_block
Best Practices for Heavy Calls
To optimize performance and avoid rate limiting when using heavy calls:
- Use Compression: Implement gzip compression for all requests (see example below)
- Batch Requests: Combine multiple requests into a single JSON-RPC batch call
- Paginate Requests: For eth_getLogs, break large block ranges into smaller chunks
- Implement Caching: Cache responses for immutable blockchain data
- Use WebSockets for Subscriptions: For real-time data, use WebSocket subscriptions instead of polling
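As a sketch of the pagination advice above, the helper below splits a large block range into fixed-size chunks, each of which would become the fromBlock/toBlock of one eth_getLogs call. The chunk size of 2,000 blocks is an illustrative value, not a SpeedyNodes requirement; tune it to your provider and query.

```python
def chunk_block_ranges(start_block, end_block, chunk_size=2000):
    """Split [start_block, end_block] into inclusive (from, to) chunks."""
    ranges = []
    current = start_block
    while current <= end_block:
        upper = min(current + chunk_size - 1, end_block)
        ranges.append((current, upper))
        current = upper + 1
    return ranges

# Each (from, to) pair would become one eth_getLogs request, e.g.:
# {"jsonrpc": "2.0", "method": "eth_getLogs",
#  "params": [{"fromBlock": hex(frm), "toBlock": hex(to)}], "id": 1}
for frm, to in chunk_block_ranges(18_000_000, 18_005_000):
    print(hex(frm), hex(to))
```

Issuing the chunks sequentially (or with a small, bounded amount of parallelism) keeps you well under the per-second limits while still covering the full range.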
Using Gzip Compression
Using gzip compression can significantly reduce data transfer size, especially for heavy calls. This helps you stay within rate limits while improving response times.
Example with curl
```shell
curl -H "Accept-Encoding: gzip" \
     -H "Content-Type: application/json" \
     -X POST "https://api.speedynodes.net/http/bsc-http?apikey=YOUR_API_KEY" \
     --data '{"jsonrpc":"2.0","method":"eth_getBlockReceipts","params":["0x2e60d60"],"id":1}' \
     --compressed
```
Real-World Results
When retrieving block receipts for block 0x2e60d60:
- Uncompressed size: 364,219 bytes
- Gzipped size: 34,175 bytes
- Bandwidth saved: 330,044 bytes (~90%)
Benefits of Compression
- Faster response times: Less data to transfer means quicker responses
- Reduced bandwidth usage: Save on data transfer costs
- Lower chance of hitting rate limits: Process more data within your existing limits
- Better application performance: Less waiting time improves user experience
Implementation in Different Languages
JavaScript/Node.js
```javascript
const axios = require('axios');

async function compressedRequest() {
  try {
    const response = await axios.post(
      'https://api.speedynodes.net/http/eth-http?apikey=YOUR_API_KEY',
      {
        jsonrpc: '2.0',
        method: 'eth_getBlockReceipts',
        params: ['0x2e60d60'],
        id: 1
      },
      {
        headers: {
          'Content-Type': 'application/json',
          'Accept-Encoding': 'gzip'
        },
        decompress: true // Axios handles gzip decompression automatically
      }
    );
    console.log(response.data);
  } catch (error) {
    console.error('Error:', error);
  }
}

compressedRequest();
```
Python
```python
import requests

url = 'https://api.speedynodes.net/http/eth-http?apikey=YOUR_API_KEY'
headers = {
    'Content-Type': 'application/json',
    'Accept-Encoding': 'gzip'
}
payload = {
    'jsonrpc': '2.0',
    'method': 'eth_getBlockReceipts',
    'params': ['0x2e60d60'],
    'id': 1
}

# requests transparently decompresses gzipped responses
response = requests.post(url, json=payload, headers=headers)
print(response.json())
```
Request Optimization Guide
Follow these best practices to optimize your requests and avoid hitting rate limits:
- Cache Immutable Data: Store historical block data, transaction receipts, and other unchangeable data
- Use Efficient Polling: Only poll for new data at reasonable intervals
- Implement Backoff Strategies: When rate limited, use exponential backoff before retrying
- Parallelize Carefully: Distribute requests evenly rather than sending bursts
- Filter Client-Side When Possible: Request only the data you need and process/filter it in your application
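The exponential-backoff advice above can be sketched as follows. The send_request callable and the retry parameters are illustrative, not part of the SpeedyNodes API; in practice send_request would wrap your HTTP POST and return the status code and body.

```python
import time

def request_with_backoff(send_request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call send_request(); on a 429 status, wait base_delay * 2**attempt
    seconds (1s, 2s, 4s, ...) and retry, up to max_retries attempts."""
    for attempt in range(max_retries):
        status, body = send_request()
        if status != 429:
            return status, body
        sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"still rate limited after {max_retries} retries")
```

Injecting the sleep function keeps the helper testable; production code would simply use the default time.sleep and pass a closure over requests.post as send_request.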
By implementing these strategies, you can maximize the performance of your applications while staying within your plan's rate limits.
API Usage Limits (per API key)
WebSocket (WSS)
- Concurrent connections: Up to 50 active WebSocket (WSS) connections per API key.
- Connection rate limit: Maximum 15 new WSS connections per second. Exceeding this will result in a 60-second cooldown, during which new connections are blocked.
- Message rate limit: Up to 500 messages per second across all active WSS connections. Traffic exceeding this by more than 10% may be throttled or dropped.
- Rejected message penalty: Repeatedly sending messages that are rejected by the node (e.g., malformed or unsupported) will trigger a 5-minute penalty, during which further messages may be blocked.
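To stay under the 15-connections-per-second limit client-side, a simple sliding-window gate can be checked before dialing each new WSS connection. This is a sketch, not part of any SpeedyNodes SDK; the limit mirrors the figure above, and the clock is injectable so the gate can be tested without real delays.

```python
import time
from collections import deque

class ConnectionGate:
    """Allow at most max_per_second new connections in any 1-second window."""

    def __init__(self, max_per_second=15, clock=time.monotonic):
        self.max_per_second = max_per_second
        self.clock = clock
        self.timestamps = deque()  # times of recent connection attempts

    def try_acquire(self):
        """Return True if a new connection may be opened now."""
        now = self.clock()
        # Drop attempts that have aged out of the 1-second window.
        while self.timestamps and now - self.timestamps[0] >= 1.0:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_per_second:
            self.timestamps.append(now)
            return True
        return False
```

When try_acquire returns False, the caller should wait and retry rather than dialing anyway, avoiding the 60-second cooldown described above.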
HTTPS (RPC)
- Request rate limit: Up to 500 requests per second per API key.
Staying within these limits ensures optimal performance and fair access for all users of the platform.