With the constant stream of updates to popular and important apps and websites, how do the developers ensure an update preserves the users' and the site's original database, considering that the updates might contain data-corrupting bugs or exploits?

This question is not about the frequent database leaks; those only involve the release of read-only data, and their exploits only take the form of accessing user accounts and data, not altering the site's database at large.

Even if there is a backup of the database, undoing changes that negatively affect millions of people would create a ton of uproar, especially if the site is based on real-time interactions, like stock-broker apps, social media apps, and instant messengers. However, I have personally never heard of such incidents.

Do companies at such a large scale have extreme QA in place, depend on user feedback and reporting, or have they just been lucky that they haven't been exploited yet? Or am I completely wrong, and these incidents do occur?

Keep in mind that I am an amateur in this domain. I have worked with MySQL databases for educational purposes and personal projects, but I found databases to be very fragile, with the ability to nuke an entire database with just two or three words. That fact is what prompted this question.
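
To illustrate what I mean, here is the kind of MySQL statement I'm thinking of (`production_db` is a made-up name):

```sql
-- Two words plus a name, no confirmation prompt, and the entire
-- database with all of its tables is gone.
DROP DATABASE production_db;
```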

  • flatbield@beehaw.org · 1 year ago

    I would also say that if you have not heard of exploits, you have not been listening. Ransomware has been in the news constantly. All security is porous, and accidents happen too. This is why a lot of companies have gone all out on IT security and other processes; there is a good reason.

    I worked for a company that was migrating an enterprise system. They did the rollout by region, but at times regions could not ship product for a month or more. I only saw it from the outside, but one has to wonder who thought that was acceptable or reasonable.

    Bottom line: in IT, if it can happen, it will. If it has not been tested, assume it does not work, and so on. Assume the worst, because there are so many more ways for things to go wrong than to work correctly.

    • Datman2020@lemmy.fmhy.ml (OP) · 1 year ago

      This is entirely the opposite of how I expected product updates to be handled. I didn't know devs planned for the worst to happen. Thank you for providing me with this insight.

  • key@lemmy.keychat.org · 1 year ago

    It depends. A typical software upgrade doesn't touch the database at all; the database doesn't change if you don't change it. Some updates do run database migrations to handle schema changes, and there are specific tools that aid in this process, such as Liquibase. Updates that are especially volatile will sometimes involve taking a backup of the database first, just in case. But any sane company will have nightly backups, if not better, with a couple of fallback options in case of issues.
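
    A rough sketch of what such a migration can look like, using Liquibase's SQL-formatted changelog (the `users` table, column, and author name here are made up):

    ```sql
    --liquibase formatted sql

    --changeset alice:add-users-bio-column
    -- Liquibase records each changeset in a tracking table so it only
    -- ever runs once; the rollback line tells it how to undo the
    -- change if the deploy has to be reverted.
    ALTER TABLE users ADD COLUMN bio TEXT;
    --rollback ALTER TABLE users DROP COLUMN bio;
    ```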

    Good database management will involve least-privilege principles, so nobody has access to the most dangerous commands, and only a few people and systems have access to the merely quite dangerous ones.
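
    For instance, in MySQL that separation might look like the following (account names, database name, and password are placeholders):

    ```sql
    -- Day-to-day application account: can read and write rows,
    -- but cannot ALTER, DROP, or otherwise change the schema.
    CREATE USER 'app_rw'@'%' IDENTIFIED BY 'placeholder-password';
    GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_rw'@'%';

    -- Separate migration account with DDL rights, used only during deploys.
    CREATE USER 'migrator'@'%' IDENTIFIED BY 'placeholder-password';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
        ON appdb.* TO 'migrator'@'%';
    ```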

    The bigger you are, the more process and protection you (ideally) put in place to decrease the chance of such incidents and, more importantly, to have a playbook defining how to recover quickly when they do happen. In big companies there end up being one to several classes of employees between the devs and the critical prod databases. Companies will pay millions in wages and vendor contracts for systems that provide observability and protection covering database-related incidents, and those protections will also include audits and change-control processes for any code that runs.