• 2 Posts
  • 143 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • beeb@lemm.ee to Espresso@infosec.pub · Watery sour shot with channeling

    Maybe you’re using light-roasted coffee, which usually requires a higher-quality grinder to extract properly. You could switch to something darker, which will definitely reduce the acidity, or get a cheap but decent hand grinder. I’ve had a 1zpresso for a while and it does the job, but I finally got a good electric one too, because grinding by hand for espresso (it takes around 45 seconds) is only fun for a few weeks.


  • I was curious, so I tried to register with a Proton Pass alias (@passmail.net), and they refuse those as well. I think they’re afraid of services that make it easy to create multiple aliases, because people could then create many accounts very easily (scammers, maybe?). It’s a dumb rationale, because it isn’t much harder to create many Gmail accounts, but it’s the only reason I can think of.


  • Note that there are several security concerns with this, notably that there is no input validation on the id path segment, which means you can read the content of any file on the server (e.g. http://localhost:3000/src%2Fmain.rs). It’s also very easy to scrape the content of all the files, because the IDs are trivially predictable. And when the server restarts, the counter starts back at zero, so previously written files get overwritten. Using a UUID instead would largely solve these issues.
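    One way to close the path-traversal hole is to validate the id segment before touching the filesystem. A minimal std-only sketch (`valid_id` is a hypothetical helper, not part of the original code) that only accepts purely numeric ids:

    ```rust
    // Hypothetical helper: accept only purely numeric ids, so an encoded
    // path separator (e.g. "src%2Fmain.rs", which decodes to "src/main.rs")
    // can never escape into the filesystem.
    fn valid_id(id: &str) -> bool {
        !id.is_empty() && id.bytes().all(|b| b.is_ascii_digit())
    }

    fn main() {
        assert!(valid_id("42"));
        assert!(!valid_id("src/main.rs"));
        assert!(!valid_id("../../etc/passwd"));
        assert!(!valid_id(""));
    }
    ```

    The handler would then return 404 (or 400) for any id that fails the check, instead of passing it to `fs::read_to_string`.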


  • Here’s a slightly more idiomatic version:

    use std::{
        fs,
        sync::atomic::{AtomicUsize, Ordering},
    };
    
    use axum::{extract::Path, http::StatusCode, routing::{get, post}, Router};
    
    const MAX_FILE_SIZE: usize = 1024 * 1024 * 10;
    static FILE_COUNT: AtomicUsize = AtomicUsize::new(0);
    
    async fn handle(Path(id): Path<String>) -> (StatusCode, String) {
        match fs::read_to_string(id) {
            Ok(content) => (StatusCode::OK, content),
            // Map a missing file to 404 instead of a generic 500
            Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
                (StatusCode::NOT_FOUND, "not found".to_string())
            }
            Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()),
        }
    }
    
    async fn submit_handle(bytes: String) -> (StatusCode, String) {
        if bytes.len() > MAX_FILE_SIZE {
            // Reject the upload if it exceeds the maximum size
            return (
                StatusCode::BAD_REQUEST,
                "ERROR: max size exceeded".to_string(),
            );
        }
        let path = FILE_COUNT.fetch_add(1, Ordering::SeqCst);
        if let Err(e) = fs::write(path.to_string(), bytes) {
            return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string());
        }
        (StatusCode::CREATED, format!("http://localhost:3000/{path}"))
    }
    
    #[tokio::main]
    async fn main() {
        let app = Router::new()
            .route("/", get(|| async { "Paste something in pastebin! use curl -X POST http://localhost:3000/submit -d 'this is some data'" }))
            .route("/{id}", get(handle))
            .route("/submit", post(submit_handle));
    
        let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
            .await
            .unwrap();
        axum::serve(listener, app).await.unwrap();
    }
    

    Note that there are no unwraps in the handlers, which you absolutely want to avoid (a panic there would crash your server). The endpoints now also return the correct HTTP status code for each case, and there are some minor changes around creating the string values (use of format! and to_string() on string slices).
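    To illustrate the counter-reset problem mentioned earlier: `fetch_add` returns the value *before* the increment, so ids are handed out sequentially from the initial value, and every process restart starts over at 0. A minimal sketch (`next_id` is a hypothetical helper mirroring the `FILE_COUNT` pattern above):

    ```rust
    use std::sync::atomic::{AtomicUsize, Ordering};

    // Same pattern as FILE_COUNT above: a process-wide counter that
    // is reset to its initial value on every restart.
    static COUNT: AtomicUsize = AtomicUsize::new(0);

    // fetch_add returns the previous value, so ids come out as 0, 1, 2, ...
    // After a restart the sequence begins at 0 again, which is why the
    // original code overwrites old files.
    fn next_id() -> usize {
        COUNT.fetch_add(1, Ordering::SeqCst)
    }

    fn main() {
        let first = next_id();
        let second = next_id();
        // Ids are strictly sequential within one process lifetime
        assert_eq!(second, first + 1);
    }
    ```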



  • I can only guess the print orientation, but it looks like curling to me. Basically, on that side the part cooling fan (or the lack thereof) lets the plastic of the overhangs curl up more than on the opposite side, which gives you this bad surface finish. Otherwise it could be a retraction issue, but that would probably show in other places too (oozing).