Just save this as karma.py and run it with Python 3.6 or newer (f-strings require 3.6+). The requests library must be installed (pip install requests).

import requests

INSTANCE_URL = "https://feddit.de"
TARGET_USER = "ENTER_YOUR_USERNAME_HERE"

LIMIT_PER_PAGE = 50

total_post_score = 0
total_comment_score = 0
page = 1

while True:
    res = requests.get(
        f"{INSTANCE_URL}/api/v3/user",
        params={"username": TARGET_USER, "limit": LIMIT_PER_PAGE, "page": page},
    ).json()

    # Stop once a page comes back with no posts and no comments.
    if not res["posts"] and not res["comments"]:
        break

    total_post_score += sum(x["counts"]["score"] for x in res["posts"])
    total_comment_score += sum(x["counts"]["score"] for x in res["comments"])
    page += 1

print("Post karma:    ", total_post_score)
print("Comment karma: ", total_comment_score)
print("Total karma:   ", total_post_score + total_comment_score)
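To see the per-page summing in isolation, here is a tiny sketch with a hypothetical page of results (the scores are made up, not real API output):

```python
# Hypothetical page of results, shaped like the Lemmy API response above.
res = {
    "posts": [{"counts": {"score": 5}}, {"counts": {"score": -1}}],
    "comments": [{"counts": {"score": 2}}],
}

# Each item's score lives under counts["score"]; sum them per category.
post_score = sum(x["counts"]["score"] for x in res["posts"])
comment_score = sum(x["counts"]["score"] for x in res["comments"])

print(post_score, comment_score)  # 4 2
```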
  • Pete Hahnloser@beehaw.org · 1 year ago

    I’m getting back into Python for unrelated reasons, and last I was using it, JSON wasn’t on my radar yet.

    I’m curious about the .json() method here, which seems to be exposing posts et al. for further manipulation without parsing. Is this really as simple as it appears?

      • Pete Hahnloser@beehaw.org · 1 year ago

        Thanks for the link! This looks like an absurdly powerful library for HTTP needs and output manipulation from the perspective of a scraping neophyte.

    • Square Singer@feddit.deOP · 1 year ago

      Yes, it totally is that easy. At first I used an API wrapper library, but then I checked out the source and found there is really no need for it, since requests already handles basically everything. .json() takes the response body of the request and runs it through json.loads(), and thus spits out a nice Python dict/list structure.

      It is absurdly simple and powerful.
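      As a concrete sketch of what .json() does under the hood (the JSON body here is made up, merely shaped like the API's reply):

```python
import json

# A made-up response body, shaped like the /api/v3/user reply above.
body = '{"posts": [{"counts": {"score": 3}}], "comments": []}'

# requests' Response.json() is roughly equivalent to json.loads(response.text):
data = json.loads(body)

# The result is a plain dict/list structure, ready to index into.
print(data["posts"][0]["counts"]["score"])  # 3
```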