I ran some further benchmarking tests to see why PsychicHttp was using more heap memory than my original build on AsyncWebServer.
What's important to point out here is that my specific AsyncWebServer and AsyncTCP are not the original GitHub versions but have been heavily optimized over the years. The task stack in my AsyncTCP, for example, is 8192 bytes.
I used this repo's benchmark folder for the testing, stripping out everything except adding multiple server.on() registrations like:
```cpp
for (uint8_t i = 0; i < number_uris; i++) {
  char path[10];
  sprintf(path, "/api%d", i);
  server.on(path, HTTP_GET, [](PsychicRequest *request) {
    return request->reply(200, "text/plain", "Hello, World!");
  });
}
```
When a URI handler is registered, I see:
- PsychicHttp uses 426 bytes (23 bytes of which go to storing the endpoint in a std::list for future use)
- AsyncWebServer uses 280 bytes
That would explain the difference: with my 80 URIs, PsychicHttp uses roughly 12 KB more heap memory.
The PsychicHttp code is clean and I don't see any areas for further optimization, so I expect this overhead is all inside ESP-IDF.
I'll keep this open for now and report back any further findings.