I am a little angry with these kinds of write-ups, where they point a lot at others (bad actors, malicious traffic) instead of admitting a problem in the software. I cannot imagine that you piss somebody off so much that this actor will focus on only one piece of your platform (again and again).
I've done many large database migrations that involved adding or removing indexes with full table locking, and they often led to (accounted-for) downtime.
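For what it's worth, some engines can build an index without locking out writes for the whole build. A minimal sketch, assuming PostgreSQL and psycopg2 (the DSN, table and column names are made up for illustration):

```python
# Hypothetical migration step: build an index without blocking writes.
# Assumes PostgreSQL; CREATE INDEX CONCURRENTLY refuses to run inside a
# transaction block, so autocommit must be enabled first.
import psycopg2

conn = psycopg2.connect("dbname=app")  # placeholder DSN
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_signups_created_at "
        "ON signups (created_at)"
    )

conn.close()
```

The trade-off is that a concurrent build is slower and costs more I/O, but the table stays writable while it runs.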
---
My honest take on this...
Most probably their regular user signup just wasn't protected well enough (captcha, CSRF) and created background processes that piled up on tables lacking indexes, which finally became a visible bottleneck.
They assumed bad traffic, so they took the wrong turn (WAF).
Then they "implemented a fix" that was supposed to add an index to a large table, and that took down the database server because of table locking. This also doesn't recover cleanly after an abort.
Next up, UPGRADE OUR DB SERVER, and the table cache is gone, hence disabling user logins to allow a warm start. Didn't work, so DOWNGRADE OUR DB SERVER, and the same thing repeats.
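Protecting a signup endpoint against this kind of abuse doesn't take much. A rough, framework-free sketch of the idea (a session-bound CSRF token plus a naive per-IP rate limit; every name and limit here is made up):

```python
# Hypothetical sketch: CSRF token tied to a session plus a naive per-IP
# rate limit for a signup endpoint. Names and limits are illustrative only.
import hmac
import hashlib
import secrets
import time
from collections import defaultdict, deque

SECRET = secrets.token_bytes(32)  # would live in real secret storage

def make_csrf_token(session_id: str) -> str:
    # Token embedded in the signup form, derived from the visitor's session.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id: str, token: str) -> bool:
    return hmac.compare_digest(make_csrf_token(session_id), token)

WINDOW_SECONDS = 60.0
MAX_SIGNUPS_PER_WINDOW = 5
_recent = defaultdict(deque)  # ip -> timestamps of recent signup attempts

def allow_signup(ip: str) -> bool:
    # Sliding-window limiter: reject once an IP exceeds the per-minute budget.
    now = time.monotonic()
    attempts = _recent[ip]
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    if len(attempts) >= MAX_SIGNUPS_PER_WINDOW:
        return False
    attempts.append(now)
    return True
```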
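On the cold-cache point: if the engine were PostgreSQL, the buffer cache could be warmed explicitly after a restart or failover instead of keeping logins disabled while it refills on its own. A sketch using the stock pg_prewarm contrib extension (table names are made up):

```python
# Hypothetical warm-up after a restart/failover, assuming PostgreSQL with
# the contrib pg_prewarm extension installed and sufficient privileges.
import psycopg2

conn = psycopg2.connect("dbname=app")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_prewarm")
    # Load the hottest tables into shared buffers before re-enabling traffic.
    for table in ("users", "sessions", "signups"):  # placeholder table names
        cur.execute("SELECT pg_prewarm(%s)", (table,))
conn.close()
```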
---
No bad actors involved.
Is there any attacker who isn’t malicious?
I once caused a 10Gbit DDoS of our own services with a typo in a code library. The attack turned out not to be malicious at all! One could say that it was, then, no longer an attack, but it was until we understood it :)
Reading the page, I'm not sure what Webflow could have done better here.
DDoS attacks are hard to mitigate and it looks like they are throwing money at the problem to scale up, which is the best you can do to survive future attacks.