• affiliate@lemmy.world

    from the article:

    Robots.txt is a line of code that publishers can put into a website that, while not legally binding in any way, is supposed to signal to scraper bots that they cannot take that website’s data.

    i do understand that robots.txt is a very minor part of the article, but i think that’s a pretty rough explanation of robots.txt

      • affiliate@lemmy.world

        i would probably word it as something like:

        Robots.txt is a document that specifies which parts of a website bots are and are not allowed to visit. While it’s not a legally binding document, it has long been common practice for bots to obey the rules listed in robots.txt.

        in that description, i’m trying to keep the accessible tone that they were going for in the article (so i wrote “document” instead of file format/IETF standard), while still trying to focus on the following points:

        • robots.txt is fundamentally a list of rules, not a single line of code
        • robots.txt can allow bots to access certain parts of a website, it doesn’t have to ban bots entirely
        • it’s not legally binding, but it is still customary for bots to follow it

        i did also neglect to mention that robots.txt allows you to specify different rules for different bots, but that didn’t seem particularly relevant here.
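        to illustrate that last point (this is my own sketch, not something from the article — the bot names and paths are just examples), a robots.txt with per-bot rules might look like:

        ```
        # rules for all bots: stay out of /private/, everything else is fine
        User-agent: *
        Disallow: /private/

        # rules for one specific bot: not allowed to crawl anything
        User-agent: ExampleBot
        Disallow: /
        ```

        each `User-agent` line starts a group of rules, and a bot is supposed to follow the most specific group that matches it.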

      • ma1w4re@lemm.ee

        List of files/pages that a website owner doesn’t want bots to crawl. Or something like that.