Exactly. And this is why I refuse to work at companies like yours.
Then good luck to you?
But you seem to have missed the point. The images I shared are an SCA (Security Configuration Assessment)… they're a "minimum configuration" standard, not a standard image. Though that SCA does live as standard images in our virtualized environments for certain OSes. I'm sure if we had more physical devices out in company-land we'd need to standardize more around images that get pushed out to them… but we don't have enough assets out of our hands to warrant that kind of streamlining.
I'm a huge proponent of Linux. Just talk to the IT people in your org… many of them will get you a way off the Windows boat. But it still has to be done in a way that meets all the security audits/policies/whatever the company must adhere to.
I literally go out of my way to get answers for folks who want off the Windows boat. Go have a big-boy adult conversation with your IT team. I'm Linux-only at home, to the point where my kids have NEVER used Windows (especially these days, with schools being Chromium-only). And yes, they use Arch (insert meme). I've converted a bunch of this company's infra to Linux that was historically Windows. If anyone wanted Linux, I'd get you something that you're happy with that met our policies. You are outright limiting yourself to workplaces that don't do any work in any auditable/certified field. That seems very short-sighted, and a quick way to limit your income in many cases.
But you do you. My company's dev team is perfectly happy. I would know, since I also do some dev work when time allows and work with them directly, regularly. Hell, most of them don't even work on their work-issued machines at all (to the point that we've stopped issuing a lot of them, at their request), since we have web-based VDI where everything happens directly on our servers. It's much easier to compile something on a machine with scalable processors available basically at a whim (nothing like 128 server cores to blast through a compile), and all of those images meet our specs as far as policy goes. But if you're looking to be that uppity, annoying user, then I'm also glad you don't work at my company. With someone like you around, we'd lose our certification(s) during the next audit period, or worse… lose consumer data. You know what happens then? The company dies, and you and I both don't have jobs anymore. Though I suspect that you, as the user who didn't want to work with IT, would have a harder time getting hired again (especially in my industry) than I would for fighting to keep the company's assets secure… but that one damn user (and their managers) just went rogue and refused to follow the policies and restrictions put in place…
I'm a software engineer, and my department head told IT we needed Macs, not because we actually do, but because they don't support Macs, so we'd be able to use the stock OS.
No, you don't. There is no Mac-only tool you would need for which there is no alternative. That "need" is a preference, more commonly referred to as a "want"… not a need. Especially with modern M-series Macs. If you walked up to me and told me you need something… and can't actually quantify why or how that need supersedes current policy, I would also tell you no. An exception to policy needs to outweigh the cost of risk by a significant margin. A good IT team will give you answers that meet your needs and the company's needs, but the company's needs come first.
either the company trusts me to follow best practices, or I look elsewhere
So if I gave you a link to a remote VM, and you set it up the way you want, then I came in after the fact and checked it against our SCA… would you score even close to a reasonable score? The fact that you're so resistant to working with IT from the get-go proves to me that you would fail to get anywhere close to following "best practices". No single person can keep track of and secure systems these days. It's just not fucking possible with the 0-days that pop out of the blue seemingly every other fucking hour. The company pays me to secure their stuff. Not you. You wasting your time doing that task inefficiently and incorrectly is a waste of company resources as well. "Best practice" would be the security folks handling the security of the company, no?
I'm Linux-only at home (to the point where my kids have NEVER used Windows
Same.
I honestly don't think this issue has anything to do with our staff, but with our corporate policies. Users can't even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).
My issue has less to do with Windows (unacceptable for other reasons) than with the lack of admin access. Our IT team eventually decided to have us install some monitoring software, which we all did while preserving root access on our devices.
I would honestly prefer our corporate laptops (ThinkPads) over Apple laptops, but we're not allowed to install Linux on them and have root access, because corporate wants control (my words, not theirs).
web-based VDI stuff where everything happens directly on our servers
I don't know your setup, but I probably wouldn't like that, because it feels like solving the wrong problem. If compile times are a significant issue, you probably need to optimize your architecture, because your app is probably a monolithic monster.
I like cloud build servers for deployment, but I hate debugging build and runtime issues remotely. There's always something that remote system is missing that I need, and I don't want to wait a day or two for it to get through the ticket system.
lose consumer data
Customer data shouldn't be on dev machines. Devs shouldn't even have access to customer data. You could compromise every dev machine in our office and you wouldn't get any customer data.
The only people with that access are our devOPs team, and they have checks in place to prevent issues. If I want something from prod to debug an issue, I ask devOPs, who gets the request cleared by someone else before complying.
I totally get the reason for security procedure, and I have no issue with that. My issue is that I need to control my operating system. Maybe I need to Wireshark some packets, or create a bridge network connection, or do something else no sane IT professional would expect the average user to need to do, and I really don't want to deal with submitting a ticket and waiting a couple days every time I need to do something.
There is no tool that is Mac-only that you would need where there is no alternative
Exactly, but that's what we had to tell IT so we wouldn't have to use the standard image, which is super locked down and a giant pain when doing anything outside the Microsoft ecosystem. I honestly hate macOS, but if I squint a bit, I can almost make it feel like my home Linux system. I would've fought with IT a bit more, but that's not the route my boss ended up taking.
We run our backend on Linux, and our customers exclusively use Windows, so there's zero reason for us to use macOS (well, except our iOS builds, but we have an outside team that does most of that). Linux would make a ton more sense (with Windows in a VM), but the company doesn't allow installing "unofficial" operating systems, and I guess my boss didn't want to deal with the limited selection of Linux laptops. I'm even willing to buy my own machine if that were allowed (it's not, and I respect that).
If our IT were more flexible, we'd probably be running Windows (and I wouldn't be working there), but we went with macOS. Maybe we could've gotten Linux if we had a rockstar heading the dept, but our IT infra is heavy on Windows, so we're pretty much the only group doing something different (corporate loves our product though, and we're obsoleting other in-house tools).
The fact that you're so resistant to working with IT from the get-go proves to me that you would fail to get anywhere close to following "best practices".
No, I've just had really bad experiences with IT groups, to the point where I just nope out if something seems like a potential nightmare. If the infra is largely Microsoft, the standard-issue hardware runs Windows, and the software group I'm interviewing with doesn't have special exceptions, I have to assume it's the bog-standard "IT group calls the shots" environment, and I'll nope right on out. For me, it's less about the pay and more about being able to actually do my job, and I'll take a pay cut to not have to deal with a crappy IT dept.
I'm sure there are good IT depts out there (and maybe that's yours), but it's nearly impossible to tell the good from the bad when interviewing a company. So I avoid anything that smells off.
It's just not fucking possible with the 0-days that pop out of the blue seemingly every other fucking hour.
Yet I've pointed out several security issues in our infra managed by a professional IT team, from zero-days that could impact us to woefully outdated infra. I'm not perfect and I don't believe anyone is, but just being in the IT position doesn't mean you're automatically better at keeping up with security patches.
I'm usually the first to update on our team (I'm a lead, so I want to catch incompatibilities before they halt development), and I work closely with our internal IT team to stay updated. In fact, just Friday I asked about some potential concerns, and it turns out we were running into resource limits on devices hosted on Linux OSes that were already out of the security-update window. So, two issues caught by curiosity about something I saw in the code as it relates to infra I can't (and shouldn't) access. I don't blame our team (they're always understaffed IMO), but my point here is that security should be everyone's concern, not just a team who locks down your device so you can't screw things up.
If everything is exactly the same, everything will be compromised at the same time, so some variation (within certain controls) is a good thing IMO. Yet top-down standardization makes that implausible.
The company pays me to secure their stuff. Not you.
The company also pays me to write secure, reliable software, and I can't do that effectively if I can't install the tools I need.
Yes, IT professionals have their place, and IMO that's on the infra side, not the end-user machine side. So set up the WiFi to block direct access between machines, segment the network using VLANs to keep resources limited to teams that need them, put a boundary between prod (Ops) and devs to contain problems, etc. But don't take away my root access. I'm happy to enable a system report to be sent to IT so they can check package versions and open ports and whatnot, but let me configure my own machine.
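To make that "system report" concrete, here's a minimal sketch of the kind of self-report I have in mind. The command choices assume a Debian-style box, and the report shape is my own invention, not any real agent's:

```python
# Hypothetical self-reporting sketch: gather installed package versions and
# listening ports so IT can audit the machine without owning root on it.
# dpkg-query/ss are Debian-flavored assumptions; swap per distro.
import json
import subprocess

def run(cmd):
    """Run a command and return its stdout lines, or [] if the tool is missing."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=False)
        return out.stdout.splitlines()
    except FileNotFoundError:
        return []

report = {
    "packages": run(["dpkg-query", "-W", "-f=${Package} ${Version}\\n"]),
    "ports": run(["ss", "-tlnH"]),  # TCP listeners, numeric, no header
}

# In practice this would be shipped to an IT-owned collection endpoint;
# here we just print a summary of what was gathered.
print(json.dumps({k: len(v) for k, v in report.items()}))
```

The point of the design is that IT gets the evidence it needs (versions, open ports) while the dev keeps control of the box itself.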
I get your points. But we simply wouldn't get along at all, even though I'd be able to provide every tool you could possibly want in a secure, policy-meeting way, and probably long before you actually ever needed it.
but I hate debugging build and runtime issues remotely. There's always something that remote system is missing that I need
If the remote system is a dev system… it should never be missing anything. So if something's missing… then there's already a disconnect. Also, if you're debugging runtime issues, you'd want faster compile times anyway, so I'm not sure why your "monolith" comment is even relevant. If it takes you 10 compiles to fully figure the problem out, and each compile finishes 5 minutes quicker on the remote system because it's not a mobile chip in a shit laptop (and it's already set up to run dev anyway), then you're saving time to actually do coding. But to you that's an "inconvenience" because you need root for some reason.
but my point here is that security should be everyone's concern, not just a team who locks down your device so you can't screw things up.
No. At least not in the sense you present it. It's not just locking down your device so that you can't screw it up; it's so that you're never a single point of failure. You're not advocating for "everyone looking out for the team". You're advocating that everyone should just cave and cater to your whims, rest of the team be damned, where your whim is a direct data-security risk. This is what the audit body will identify at audit time, and when it's identified, the company will likely face an ultimatum: fix the problem (lock the machine down to the policy standards, or remove your access outright, which would likely mean firing you since your job requires access) or the certification will not be renewed. And if insurance has to kick in, and it's found that you were "special", they'll very easily deny the whole claim, stating that the company was willfully negligent. You are not special enough. I'm not special enough, even as the C-suite officer in charge of IT. The policies keep you safe just as much as they keep the company safe. You follow them, and the company's overall posture is better. You follow them, and if something goes wrong you can point at policy and say "I followed the rules". Root access to a company machine because you think you might one day need to install something on it is a cop-out answer. The tools you use don't change all that often, so the 2-day wait for the IT team to respond (your scenario) would only happen once in how many days of working for the company? It only takes one sudo command to install something compromised and bring the device on campus or onto the SDN (which you wouldn't be able to access on your own install anyway… so you wouldn't be able to do work regardless, or connect to dev machines at all).
Edit to add:
Users can't even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).
We're the same! But… it's Firefox… If you want to use alternate browsers while on our network, you're using the VDI, which spins up a disposable container of a number of different options, but none of them are persistent. In our case, catering to Chrome means potentially using non-standard Chrome-specific functions, which we specifically don't do. Most of us are pretty anti-Google overall in our company anyway.
but it's nearly impossible to tell the good from the bad when interviewing a company.
This implies the entire build still takes a few minutes on that beefier machine, which is in the "check back later" category of tasks. Rebuilds need to be seconds, and going from 10s to 5s (or even 30s) isn't worth a separate machine.
If my builds took that long, I'd seriously reconsider how the project is structured to dramatically reduce that. A fresh build taking forever is fine, you can do that at the end of the day or whatever, but edit/reload should be very fast.
it's so that you're never a single point of failure
That belongs at the system architecture level IMO. A dev machine shouldn't be that interesting to an attacker, since a dev only needs:
- code and internal docs
- test environments
- "personal" stuff (paystubs, contracts, etc.)
- VPN config for remote access to test envs
My access to all of the source material is behind a login, so IT can easily disable my access and entirely cut an attacker out (and we require refreshing fairly frequently). The biggest loss is IP theft, which only requires read permissions to my home directory, and most competitors won't touch that type of IP anyway (and my internal docs are dev-level, not strategic). Most of my cached info is stale since I tend to only work in a particular area at a given time (i.e. if I'm working on reports, I don't need the latest simulation code). I also don't have any access to production, and I've even told our devOPs team about things that I was able to access but shouldn't. I don't need or even want prod access.
The main defense here is frequent updates, and I'm 100% fine with having an automated system package monitor, and if IT really wants it, I can configure sudo to send an email every time I use it. I tend to run updates weekly, though sometimes I'll wait 2 weeks if I'm really involved in a project.
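For what it's worth, sudo can already do the email part natively; a sketch of the relevant sudoers defaults (the address is a placeholder):

```
# /etc/sudoers.d/notify - mail on every sudo invocation
Defaults mail_always
Defaults mailto="it-alerts@example.com"
```

This assumes a working local mailer on the machine (sudo hands the message to its configured mailer path); without one, no mail goes out.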
if something goes wrong you can point at policy and say "I followed the rules"
And this, right here, is my problem with a lot of C-suite-level IT policy: it's often more about CYA and less about actual security. If there were another 9/11, the airlines would point to the TSA and say "not my problem" when the attack very likely came through their supply chain. "I was just following orders" isn't a great defense when the actor should have known better. Or on the IT side specifically: if my machine was compromised because IT was late rolling out an update, my machine was still compromised, so it doesn't really matter whose shoulders the blame lands on.
The focus should be less on preventing an attack (still important) and more on limiting the impact of an attack. My machine getting compromised means leaked source code, some dev docs, and having to roll back/recreate test environments. Prod keeps on going, and any commits an attacker makes in my name can be specifically audited. It would take maybe a day to assess the damage, and that's it; and if I'm regularly sending system monitoring packets, an automated system should be able to detect unusual activity pretty quickly (this has happened with our monitoring SW, and a quick "yeah, that was me" message to IT was enough).
My machine is quite unlikely to be compromised in the first place though. I run frequent updates, I have a high-quality password, and I use a password manager (with an even better password, which locks itself after a couple hours) to access everything else. A casual drive-by attacker won't get much beyond whatever is cached on my system, and compromising root wouldn't get much more.
For your average office worker who only needs office software and a browser, sure, lock that sucker down. But when you're talking about a development team that may need to do system-level tweaks to debug/optimize, do regular training or something so they can be trusted to protect their systems.
tools that you use don't change all that often
Sure, but when I need them, I need them urgently. Maybe there's a super-high-priority bug on production that I need to track down, and waiting 2 days isn't acceptable, because we need same-day turnaround. Yeah, I could escalate and get someone over pretty quickly, but things happen when critical people are on leave, and IT can review things afterward. That's pretty rare, and if I have time, I definitely run changes like that through our IT pros (i.e. "hey, I want to install X to do Y, any concerns?").
Most of us are pretty anti-Google overall in our company anyway.
Then maybe we'd be a better fit than I thought. If, during the interview process, I discovered that IT didn't use MS or Google for their cloud stuff, I might actually be okay with a locked-down machine, because that IT team is absolutely based. I'd probably ask a lot of follow-up questions, and maybe you'd mitigate my concerns.
But when shopping around for a new job, I steer clear of any red flags, and "even devs use standard IT images" and "we're an MS shop" completely kill my interest. My current company is an MS shop, but they said we'd have our own infra for our team, and we use Macs specifically to avoid the standard, locked-down IT images.
On my personal machines, I use Firefox, openSUSE (due to openQA, YaST, etc.; TW on desktop, Leap on NAS and VPS), and full-disk encryption. I'm considering moving to MicroOS as well, for even better security and ease of maintenance. I expose internal services through a WireGuard tunnel, and each of those services runs in a Docker container (planning to switch to Podman). I follow cybersecurity news, and I'm usually fully patched at home before we're patched at work. Cybersecurity is absolutely something I'm passionate about, and I raise concerns a few times a year, which our OPs team almost always acts on.
All of that said, I absolutely don't expect the keys to the kingdom, and I actually encourage our OPs team to restrict my access to resources I don't technically need. However, I do expect admin access on my work machine, because I do sometimes need to get stuff done quickly.
And this, right here, is my problem with a lot of C-suite-level IT policy: it's often more about CYA and less about actual security.
Remediation after an attack happens is part of the security posture. How the company recovers and continues to operate is a vital part of security-incident planning. The CYA aspect comes from the legal side of that planning. You can take every best practice ever, but if something still happens, what does the company do if it doesn't have an insurance fallback or other protections? Even a minor data breach can cause all sorts of legal trouble to crop up, even ignoring a litigious user base. Having the policies satisfied keeps those protections in place. It keeps the company operating, even when an honest mistake causes a significant problem. Unfortunately, it's a necessary evil.
A casual drive-by attacker won't get much beyond whatever is cached on my system, and compromising root wouldn't get much more.
On a company computer? That's presumably on a company network? Able to talk and communicate with all the company infrastructure? You seem to be specifically narrowing the scope to just your machine, when a compromised machine talks to way more than just the shit on the local machine. With a root jump host on a network, I can get a lot more than just what's cached on your system.
I discovered that IT didn't use MS or Google for their cloud stuff
We don't use Google at all if it's at all possible to get away with it… We do have disposable Docker images that can be spun up in the VDI interface to do things like test the web side of the program in a Chrome browser (and Brave, Chromium, Edge, Vivaldi, etc.). We do use MS for email (and by extension other Office-suite stuff since it's in the license; Teams… as much as I fucking hate what they do to the GUI/app every other fucking month… is useful for communicating with other companies, as we often have to get on calls with API teams from other companies), but that's it. Nextcloud/LibreOffice is the actual company storage for "cloud"-like functions… and there's backup local mail-host infrastructure lying in wait for the day that MS inevitably fucks up their product more than I'm willing to deal with as far as O365 mail goes.
I'm considering moving to MicroOS as well, for even better security and ease of maintenance.
I'm pushing for a rewrite out of an archaic '80s language (probably why compile times suck for us in general) into Rust, running on Alpine, to get rid of the need for Windows Server altogether in our infrastructure… and for the low-maintenance value of a tiny Linux distro. I'm not particularly on the SUSE boat… just because it's never come up. I float more on the Arch side of Linux personally, and Debian for production stuff typically. Most of our standalone products/infrastructure are already on Debian/Alpine containers. Every year I've been here I've pushed hard to get rid of more and more, and it's been huge as far as stability and security go for the company overall.
"even devs use standard IT images"
No, it's "even devs meet the SCA". Not necessarily a standard image. I pointed it out, but only in passing. I can spawn an SCA for many different Linux OSes that enforces/proves a minimum security posture for the company overall. I honestly wouldn't care what you did with the system outside of not having root and meeting the SCA, personally. Most of our policy is effectively that, but in nicer terms for auditing people. The root restriction is simply so that you can't disable the tools that prove the audit, and by extension so that I know, as the guy ultimately in charge of the security posture, that we've done everything reasonable to keep security above industry standard.
The SCA checks for configuration hardening in most cases. For that same Debian example I posted above, here's a snippet of the checks.
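For illustration, a minimal sketch of what such hardening checks boil down to. The rules, paths, and patterns below are my assumptions for the example, not the actual Debian policy:

```python
# Each SCA rule is essentially "this file must contain a line matching this
# pattern"; the assessment result is the pass percentage across all rules.
import re

RULES = [
    # (description, path, regex a line must match for a PASS)
    ("SSH root login disabled", "/etc/ssh/sshd_config", r"^\s*PermitRootLogin\s+no\b"),
    ("Password max age defined", "/etc/login.defs", r"^\s*PASS_MAX_DAYS\s+\d+"),
]

def check(path, pattern):
    """True if any line in the file matches; a missing file counts as a FAIL."""
    try:
        with open(path) as fh:
            return any(re.search(pattern, line) for line in fh)
    except OSError:
        return False

def sca_score(rules):
    """Evaluate every rule and return per-rule results plus a 0-100 score."""
    results = {desc: check(path, pat) for desc, path, pat in rules}
    return results, 100 * sum(results.values()) // len(results)
```

A real SCA engine layers on rule IDs, remediation text, and command/registry checks, but the enforce-and-prove loop is the same idea.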
Able to talk and communicate with all the company infrastructure?
No, we have hard limits on what people can access. I can't access prod infra, full stop. I can't even do a prod deployment w/o OPs spinning up the deploy environment (our Sr. Support Eng. can do it as well if OPs aren't available).
We have three (main) VPNs:
- corporate net - IT-administered internal stuff; I don't need it for email and whatnot, but I do need it for our corporate wiki
- dev net - test infra, source code, etc.
- OPs net - prod infra - few people have access (I don't)
I can't be on two at the same time, and each requires MFA. The IT-supported machines auto-connect to the corporate VPN, whereas, as a dev, I only need the corporate VPN like once a year, if that, so I'm almost never connected. Joe over in accounting can't see our test infra, and I can't see theirs. If I were in charge of IT, I would have more segmentation like this across the org so a compromise at accounting can't compromise R&D, for example.
None of this has anything to do with root on my machine though. Worst-case scenario, I guess I infect everyone who happens to be on the VPN at the time and has a similar, unpatched vulnerability, which means a few days of everyone reinstalling stuff. That's annoying, but we're talking a week or so of productivity loss, and that's about it. Having IT handle updates may reduce the chances of a successful attack, but it won't do much to contain a successful attack.
If one machine is compromised, you have to assume all devices that machine can talk to are also compromised, so the best course of action is to reduce interaction between devices. Instead of IT spending their time validating and rolling out updates, I'd rather they spend time reducing the potential impact of a single point of failure. Our VPN currently isn't a proper DMZ (I can access ports my coworkers open if I know their internal IP), and I'd rather they fix that than care about whether I have root access. There's almost no reason I'd ever need to connect directly to a peer's machine, so that should be a special, time-limited request, but I may need to grab a switch and bridge my machine's network if I needed to test some IoT crap on a separate net (and I need root for that).
Nextcloud/LibreOffice is the actual company storage for "cloud"-like functions…
Nice, we use Google Drive (dev test data) and whatever MS calls their drive (Teams recordings, most shared docs, etc). The first is managed by our internal IT group and is mostly used w/ external teams (we have two groups), and the second is managed by our corporate IT group. I hate both, but it works I guess. We use Slack for internal team communication, and Teams for corporate stuff.
an archaic '80s language (probably why compile times suck for us in general) into Rust
That's not going to help the compile times. :)
I don't use Rust at work (wish I did), but I do use it for personal projects (I'm building a P2P Lemmy alternative), and I've been able to keep build times reasonable. We'll see what happens when SLOC increases, but I'm keeping an eye on projects like Cranelift.
I float more on the arch side of linux personally
That's fair. I used Arch for a few years, but got tired of manually intervening when updates went sideways, especially Nvidia driver updates. openSUSE Tumbleweed's openQA seemed to cut that down a bit, which is why I switched, and snapper made rollbacks painless when the odd Nvidia update borked stuff. I'm now on AMD GPUs, so update breakage has been pretty much non-existent. With some orchestration, Arch can be a solid server distro; I just personally want my desktop and servers to run the same family, and openSUSE was the only option that had a rolling desktop and stable servers.
For servers, I used to use Debian, and all our infra uses either Debian or Ubuntu. If I were in charge, I'd probably migrate Ubuntu to MicroOS since we only need a container host anyway. I'm comfortable w/ apt, pacman, and zypper, and I've done my share of dpkg shenanigans as well (we did unattended Debian upgrades for an IoT project).
"even devs meet SCA".
SCA is for payment services, no? I'm in the US, and this seems to be an EU thing I'm not very familiar with, but regardless, we don't touch ecommerce at all; we're B2B and all payments go through invoices.
The root restriction is simply so that you can't disable the tools that prove the audit
If you're worried someone will disable your tools, why would you hire them in the first place? Also, that should be painfully obvious, because you wouldn't get reporting updates, no?
We do auditing, and our devOPs team gets a weekly report from IT about any devices that aren't updated yet or aren't reporting. They also do a manual check every quarter or so to verify serials and version numbers and whatnot. I've gotten one notice from our local devOPs person, and very few of my team show up either. The ones that do show up tend to be our UX and Product teams, and honestly, they have more access to interesting info than we devs do (i.e. they have planned features for the next 6 months; we just have the next month or so). And they need far fewer exceptions to the rules, since UX mostly just needs their design software and Product just needs office stuff and a browser.
I obviously can't speak for all devs, but in general, devs tend to be more interested in applying updates in a timely manner and keeping things secure. In fact, I think all of my devs already used a password manager and MFA before starting, which absolutely isn't the case for other positions.
None of this has anything to do with root on my machine though.
But it does. If your machine is compromised, and they have root permissions to run whatever they want, it doesn't matter how segmented everything is; you said yourself you jump between them (though rarely).
Security Configuration Assessment
SCA is for payment services, no? I'm in the US, and this seems to be an EU thing I'm not very familiar with, but regardless, we don't touch ecommerce at all; we're B2B and all payments go through invoices.
No, it's just a term for a defined check that configurations meet a standard. An SCA can be configured to check on any particular configuration change.
Also, that should be painfully obvious, because you wouldn't get reporting updates, no?
Not necessarily? Hard to tell if something is disabled vs just off.
If you're worried someone will disable your tools, why would you hire them in the first place?
I don't hire people… especially people in other departments.
But while I found this discussion fun, I have to get back to work at this point. Shit just came up with a vendor we use for our old archaic code that might accelerate a Rust rewrite… and, logically related to the conversation, I might be in the market for some Rust devs.
Sure, but I need MFA to do so. So both my phone and my laptop would need to be compromised to jump between networks, unless we're talking about a long-lived, opportunistic trojan or something, which smells a lot like a targeted attack.
might accelerate a rust-rewriteā¦ and logically related to the conversation I might be in the market for some rust devs.
No you don't. There is no tool that is Mac-only that you would need where there is no alternative. This need is a preference, or more commonly referred to as a "want"… not a need. Especially on modern M* Macs. If you walked up to me and told me you need something, and can't actually quantify why or how that need supersedes current policy, I would also tell you no. An exception to policy needs to outweigh the cost of risk by a significant margin. A good IT team will give you answers that meet your needs and the company's needs, but the company's needs come first.
So if I gave you a link to a remote VM, and you set it up the way you want, then I came in after the fact and checked it against our SCA… do you think you'd score anywhere close to reasonable? The fact that you're so resistant to working with IT from the get-go proves to me that you would fail to get anywhere close to following "best practices". No single person can keep track of and secure systems these days. It's just not fucking possible with the 0-days that pop out of the blue seemingly every other fucking hour. The company pays me to secure their stuff. Not you. You wasting your time doing that task inefficiently and incorrectly is a waste of company resources as well. "Best practice" would be the security folks handling the security of the company, no?
Same.
I honestly don't think this issue has anything to do with our staff, but with our corporate policies. Users can't even install an alternative browser, which is why our devs only support Chrome (our users are all corporate customers).
My issue has less to do with Windows (unacceptable for other reasons) and more to do with the lack of admin access. Our IT team eventually decided to have us install some monitoring software, which we all did while preserving root access on our devices.
I would honestly prefer our corporate laptops (ThinkPads) over Apple laptops, but we're not allowed to install Linux on them and have root access because corporate wants control (my words, not theirs).
I don't know your setup, but I probably wouldn't like that, because it feels like solving the wrong problem. If compile times are a significant issue, you probably need to optimize your architecture, because your app is probably a monolithic monster.
I like cloud build servers for deployment, but I hate debugging build and runtime issues remotely. There's always something that remote system is missing that I need, and I don't want to wait a day or two for it to get through the ticket system.
Customer data shouldn't be on dev machines. Devs shouldn't even have access to customer data. You could compromise every dev machine in our office and you wouldn't get any customer data.
The only people with that access are our devOPs team, and they have checks in place to prevent issues. If I want something from prod to debug an issue, I ask devOPs, who gets the request cleared by someone else before complying.
I totally get the reason for security procedure, and I have no issue with that. My issue is that I need to control my operating system. Maybe I need to Wireshark some packets, or create a bridge network connection, or do something else no sane IT professional would expect the average user to need to do, and I really don't want to deal with submitting a ticket and waiting a couple days every time I need to do something.
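(Worth noting: the packet-capture case doesn't strictly require day-to-day root on Linux. A common sketch, assuming a Debian-style install where Wireshark's capture helper lives at `/usr/bin/dumpcap`, is a one-time capability grant:

```shell
# One-time, admin-assisted setup: give the capture helper the raw-socket
# capabilities, so later captures run as a normal user without sudo.
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap

# Verify the capabilities took effect.
getcap /usr/bin/dumpcap
```

That still needs IT to do the initial grant, but it turns "root forever" into a single reviewable change.)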
Exactly, but that's what we had to tell IT so we wouldn't have to use the standard image, which is super locked down and a giant pain when doing anything outside the Microsoft ecosystem. I honestly hate macOS, but if I squint a bit, I can almost make it feel like my home Linux system. I would've fought with IT a bit more, but that's not the route my boss ended up taking.
We run our backend on Linux, and our customers exclusively use Windows, so there's zero reason for us to use macOS (well, except our iOS builds, but we have an outside team that does most of that). Linux would make a ton more sense (with Windows in a VM), but the company doesn't allow installing "unofficial" operating systems, and I guess my boss didn't want to deal with the limited selection of Linux laptops. I'm even willing to buy my own machine if that would be allowed (it's not, and I respect that).
If our IT was more flexible, we'd probably be running Windows (and I wouldn't be working there), but we went with macOS. Maybe we could've gotten Linux if we had a rockstar heading the dept, but our IT infra is heavy on Windows, so we're pretty much the only group doing something different (corporate loves our product though, and we're obsoleting other in-house tools).
No, I've just had really bad experiences with IT groups, to the point where I just nope out if something seems like a potential nightmare. If infra is largely Microsoft, the standard-issue hardware runs Windows, and the software group I'm interviewing with doesn't have special exceptions, I have to assume it's the bog-standard "IT group calls the shots" environment, and I'll nope right on out. For me, it's less about the pay and more about being able to actually do my job, and I'll take a pay cut to not have to deal with a crappy IT dept.
I'm sure there are good IT depts out there (and maybe yours is one), but it's nearly impossible to tell the good from the bad when interviewing at a company. So I avoid anything that smells off.
Yet I've pointed out several security issues in our infra managed by a professional IT team, from zero-days that could impact us to woefully outdated infra. I'm not perfect and I don't believe anyone is, but just being in the IT position doesn't mean you're automatically better at keeping up with security patches.
I'm usually the first to update on our team (I'm a lead, so I want to catch incompatibilities before they halt development), and I work closely with our internal IT team to stay updated. In fact, just Friday I asked about some potential concerns, and it turns out we were running into resource limits on devices hosted on Linux OSes that were already out of the security update window. So, two issues caught by curiosity about something I saw in the code as it relates to infra I can't (and shouldn't) access. I don't blame our team (they're always understaffed IMO), but my point here is that security should be everyone's concern, not just that of a team who locks down your device so you can't screw things up.
If everything is exactly the same, everything will be compromised at the same time, so some variation (within certain controls) is a good thing IMO. Yet top-down standardization makes that implausible.
The company also pays me to write secure, reliable software, and I can't do that effectively if I can't install the tools I need.
Yes, IT professionals have their place, and IMO that's on the infra side, not the end-user machine side. So set up the WiFi to block direct access between machines, segment the network using VLANs to keep resources limited to teams that need them, put a boundary between prod (Ops) and devs to contain problems, etc. But don't take away my root access. I'm happy to enable a system report to be sent to IT so they can check package versions and open ports and whatnot, but let me configure my own machine.
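(For what it's worth, the VLAN half of that wishlist is a couple of iproute2 commands on the Linux side. A minimal sketch, with the interface name and VLAN id as placeholders:

```shell
# Create a tagged VLAN subinterface (id 10 assumed) on eth0 and bring it up;
# the switch port must be configured to trunk the same VLAN tag.
sudo ip link add link eth0 name eth0.10 type vlan id 10
sudo ip link set eth0.10 up
```

The actual isolation policy still lives on the switches and firewalls, but this is the client-side end of it.)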
I get your points. But we simply wouldn't get along at all. Even though I'd be able to provide every tool you could possibly want in a secure, policy-meeting way, and probably long before you actually ever needed it.
If the remote system is a dev system, it should never be missing anything. So if something's missing, then there's already a disconnect. Also, if you're debugging runtime issues, you'd want faster compile times anyway, so I'm not sure why your "monolith" comment is even relevant. If it takes you 10 compiles to fully figure the problem out, and you end up compiling 5 minutes quicker on the remote system because it's not a mobile chip in a shit laptop (that's already set up to run dev anyway), then you're saving time to actually do coding. But to you that's an "inconvenience" because you need root for some reason.
No. At least not in the sense you present it. It's not just locking down your device so that you can't screw it up. It's so that you're never a single point of failure. You're not advocating for "everyone looking out for the team". You're advocating that everyone should just cave and cater to your whim, rest of the team be damned. Where your whim is a direct data-security risk. This is what the audit body will identify at audit time, and likely an ultimatum will occur for the company when it's identified: fix the problem (lock down the machine to the policy standards, or remove your access outright, which would likely mean firing you since your job requires access) or certification will not be renewed. And if insurance has to kick in, and it's found that you were "special", they'll very easily deny the whole claim, stating that the company was willfully negligent. You are not special enough. I'm not special enough, even as the C-suite officer in charge of it. The policies keep you safe just as much as they keep the company safe. You follow them, and the company's posture overall is better. You follow them, and if something goes wrong you can point at policy and say "I followed the rules". Root access to a company machine because you think you might one day need to install something on it is a cop-out answer; the tools you use don't change all that often, so the 2-day wait for the IT team to respond (your scenario) would only happen once in how many days of working for the company? It only takes one sudo command to install something compromised, then bring the device on campus or onto the SDN (which you wouldn't be able to access on your own install anyway, so you wouldn't be able to do work regardless, or connect to dev machines at all).
Edit to add:
We're the same! But… it's Firefox… If you want to use alternate browsers while on our network, you're using the VDI, which spins up a disposable container with a number of different options. But none of them are persistent. In our case, catering to Chrome means potentially using non-standard, Chrome-specific functions, which we specifically don't do. Most of us are pretty anti-Google overall in our company anyway.
This is fair enough.
This implies the entire build still takes a few minutes on that beefier machine, which is in the "check back later" category of tasks. Rebuilds need to be seconds, and going from 10s to 5s (or even 30s) isn't worth a separate machine.
If my builds took that long, I'd seriously reconsider how the project is structured to dramatically reduce that. A fresh build taking forever is fine, you can do that at the end of the day or whatever, but edit/reload should be very fast.
That belongs at the system architecture level IMO. A dev machine shouldn't be that interesting to an attacker, since a dev only needs:
My access to all of the source material is behind a login, so IT can easily disable my access and entirely cut an attacker out (and we require refreshing fairly frequently). The biggest loss is IP theft, which only requires read permissions to my home directory, and most competitors won't touch that type of IP anyway (and my internal docs are dev-level, not strategic). Most of my cached info is stale, since I tend to only work in a particular area at a given time (i.e. if I'm working on reports, I don't need the latest simulation code). I also don't have any access to production, and I've even told our devOPs team about things that I was able to access but shouldn't. I don't need or even want prod access.
The main defense here is frequent updates, and I'm 100% fine with having an automated system package monitor, and if IT really wants it, I can configure `sudo` to send an email every time I use it. I tend to run updates weekly, though sometimes I'll wait 2 weeks if I'm really involved in a project.

And this, right here, is my problem with a lot of C-suite-level IT policy: it's often more about CYA and less about actual security. If there were another 9/11, the airlines would point to TSA and say, "not my problem," when the attack very likely came through their supply chain. "I was just following orders" isn't a great defense when the actor should have known better. Or on the IT side specifically: if my machine was compromised because IT was late rolling out an update, my machine was still compromised, so it doesn't really matter whose shoulders the blame lands on.
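(The sudo-alerting idea mentioned above is actually built into sudoers already; a minimal sketch, with the drop-in path and address as placeholders:

```shell
# /etc/sudoers.d/mail-alerts (always edit via visudo to avoid lockout)
Defaults mailto="it-alerts@example.com"   # placeholder recipient
Defaults mail_always                      # mail on every sudo invocation
```

No extra tooling needed, just a working local MTA.)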
The focus should be less on preventing an attack (still important) and more on limiting the impact of an attack. My machine getting compromised means leaked source code, some dev docs, and having to roll back/recreate test environments. Prod keeps on going, and any commits an attacker makes in my name can be specifically audited. It would take maybe a day to assess the damage, and that's it, and if I'm regularly sending system monitoring packets, an automated system should be able to detect unusual activity pretty quickly (and this has happened with our monitoring SW, and a quick "yeah, that was me" message to IT was enough).
My machine is quite unlikely to be compromised in the first place though. I run frequent updates, I have a high-quality password, and I use a password manager (with an even better password, that locks itself after a couple hours) to access everything else. A casual drive-by attacker won't get much beyond whatever is cached on my system, and compromising root wouldn't get much more.
For your average office worker who only needs office software and a browser, sure, lock that sucker down. But when you're talking about a development team that may need to do system-level tweaks to debug/optimize, do regular training or something so they can be trusted to protect their system.
Sure, but when I need them, I need them urgently. Maybe there's a super-high-priority bug on production that I need to track down, and waiting 2 days isn't acceptable, because we need same-day turnaround. Yeah, I could escalate and get someone over pretty quickly, but things happen when critical people are on leave, and IT can review things afterward. That's pretty rare, and if I have time, I definitely run changes like that through our IT pros (i.e. "hey, I want to install X to do Y, any concerns?").
Then maybe we'd be a better fit than I thought. If, during the interview process, I discovered that IT didn't use MS or Google for their cloud stuff, I may actually be okay with a locked-down machine, because the IT team is absolutely based. I'd probably ask a lot of follow-up questions, and maybe you'd mitigate my concerns.
But when shopping around for a new job, I steer clear of any red flags, and "even devs use standard IT images" and "we're an MS shop" completely kill my interest. My current company is an MS shop, but they said we have our own infra for our team, and we use Macs specifically to avoid the standard, locked-down IT images.
On my personal machines, I use Firefox, openSUSE (due to openQA, YaST, etc.; TW on desktop, Leap on NAS and VPS), and full-disk encryption. I'm considering moving to MicroOS as well, for even better security and ease of maintenance. I expose internal services through a WireGuard tunnel, and each of those services runs in a Docker container (planning to switch to podman). I follow cybersecurity news, and I'm usually fully patched at home before we're patched at work. Cybersecurity is absolutely something I'm passionate about, and I raise concerns a few times/year, which our OPs team almost always acts on.
All of that said, I absolutely don't expect the keys to the kingdom, and I actually encourage our OPs team to restrict my access to resources I don't technically need. However, I do expect admin access on my work machine, because I do sometimes need to get stuff done quickly.
Remediation after an attack happens is part of the security posture. How the company recovers and continues to operate is a vital part of security incident planning. The CYA aspect of it comes from the legal side of that planning. You can take every best practice ever, but if something happens anyway, what does the company do if it doesn't have insurance fallback or other protections? Even a minor data breach can cause all sorts of legal troubles to crop up, even ignoring a litigious user base. Having the policies satisfied keeps those protections in place. It keeps the company operating, even when an honest mistake causes a significant problem. Unfortunately, it's a necessary evil.
On a company computer? That's presumably on a company network? Able to talk and communicate with all the company infrastructure? You seem to be specifically narrowing the scope to just your machine, when a compromised machine talks to way more than just the shit on the local machine. With a root jump-host on a network, I can get a lot more than just what's cached on your system.
We don't use Google at all if it's at all possible to get away with it… We do have disposable Docker images that can be spun up in the VDI interface to do things like test the web side of the program in a Chrome browser (and Brave, Chromium, Edge, Vivaldi, etc.). We do use MS for email (and by extension other office-suite stuff, since it's in the license; Teams, as much as I fucking hate what they do to the GUI/app every other fucking month, is useful to communicate with other companies, as we often have to get on calls with API teams from other companies), but that's it, and Nextcloud/LibreOffice is the actual company storage for "cloud"-like functions… and there's backup local mail-host infrastructure lying in wait for the day that MS inevitably fucks up their product more than I'm willing to deal with their shenanigans as far as O365 mail goes.
I'm pushing for a rewrite out of an archaic '80s language (probably why compile times suck for us in general) into Rust, running it on Alpine to get rid of the need for Windows Server altogether in our infrastructure… and for the low-maintenance value of a tiny Linux distro. I'm not particularly on the SUSE boat… just because it's never come up. I float more to the Arch side of Linux personally, and Debian for production stuff typically. Most of our standalone products/infrastructure are already on Debian/Alpine containers. Every year I've been here I've pushed hard to get rid of more and more, and it's been huge as far as stability and security go for the company overall.
No, it's "even devs meet the SCA". Not necessarily a standard image. I pointed it out, but only in passing. I can spawn an SCA for many different Linux OSes that enforces/proves a minimum security posture for the company overall. I honestly wouldn't care what you did with the system outside of not having root and meeting the SCA, personally. Most of our policy is effectively that, but in nicer terms for auditing people. The root restriction is simply so that you can't disable the tools that prove the audit, and by extension so that I know, as the guy ultimately in charge of the security posture, that we've done everything reasonable to keep security above industry standard.
The SCA checks for configuration hardening in most cases. That same Debian example I posted above, here's a snippet of the checks
No, we have hard limits on what people can access. I can't access prod infra, full stop. I can't even do a prod deployment w/o OPs spinning up the deploy environment (our Sr. Support Eng. can do it as well if OPs aren't available).
We have three (main) VPNs:
I can't be on two at the same time, and each requires MFA. The IT-supported machines auto-connect to the corporate VPN, whereas as a dev, I only need the corporate VPN like once/year, if that, so I'm almost never connected. Joe over in accounting can't see our test infra, and I can't see theirs. If I were in charge of IT, I would have more segmentation like this across the org, so a compromise at accounting can't compromise R&D, for example.
None of this has anything to do with root on my machine though. Worst-case scenario, I guess I infect everyone that happens to be on the VPN at the time and has a similar, unpatched vulnerability, which means a few days of everyone reinstalling stuff. That's annoying, but we're talking a week or so of productivity loss, and that's about it. Having IT handle updates may reduce the chances of a successful attack, but it won't do much to contain a successful attack.
If one machine is compromised, you have to assume all devices that machine can talk to are also compromised, so the best course of action is to reduce interaction between devices. Instead of IT spending their time validating and rolling out updates, I'd rather they spend time reducing the potential impact of a single point of failure. Our VPN currently isn't a proper DMZ (I can access ports my coworkers open if I know their internal IP), and I'd rather they fix that than care about whether I have root access. There's almost no reason I'd ever need to connect directly to a peer's machine, so that should be a special, time-limited request, but I may need to grab a switch and bridge my machine's network if I needed to test some IOT crap on a separate net (and I need root for that).
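(The bridging case is a good example of why: it really is just a few root-only iproute2 commands. A sketch, with the interface names as placeholders:

```shell
# Create a software bridge and enslave the built-in NIC plus a USB NIC
# feeding the IoT switch (eth0/enx0 are placeholder names).
sudo ip link add name br0 type bridge
sudo ip link set eth0 master br0
sudo ip link set enx0 master br0
sudo ip link set br0 up
```

Every one of those fails without root, which is exactly the kind of ad hoc, time-boxed need that's painful to push through a ticket queue.)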
Nice, we use Google Drive (dev test data) and whatever MS calls their drive (Teams recordings, most shared docs, etc). The first is managed by our internal IT group and is mostly used w/ external teams (we have two groups), and the second is managed by our corporate IT group. I hate both, but it works, I guess. We use Slack for internal team communication, and Teams for corporate stuff.
That's not going to help the compile times. :)
I don't use Rust at work (wish I did), but I do use it for personal projects (I'm building a P2P Lemmy alternative), and I've been able to keep build times reasonable. We'll see what happens when SLOC increases, but I'm keeping an eye on projects like Cranelift.
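(A good chunk of "keeping build times reasonable" in Rust is just dev-profile tuning; a hedged Cargo.toml sketch of commonly used settings, not from any project mentioned here:

```shell
# Cargo.toml — dev-profile tweaks to speed up the edit/rebuild cycle.
[profile.dev]
debug = 1                    # limited debuginfo, faster linking

# Optimize dependencies once; your own crate stays quick to rebuild.
[profile.dev.package."*"]
opt-level = 2
```

`cargo check` for iteration instead of full builds helps at least as much.)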
That's fair. I used Arch for a few years, but got tired of manually intervening when updates went sideways, especially Nvidia driver updates. openSUSE Tumbleweed's openQA seemed to cut that down a bit, which is why I switched, and `snapper` made rollbacks painless when the odd Nvidia update borked stuff. I'm now on AMD GPUs, so update breakage has been pretty much non-existent. With some orchestration, Arch can be a solid server distro; I just personally want my desktop and servers to run the same family, and openSUSE was the only option that had a rolling desktop and stable servers.

For servers, I used to use Debian, and all our infra uses either Debian or Ubuntu. If I were in charge, I'd probably migrate Ubuntu to MicroOS, since we only need a container host anyway. I'm comfortable w/ apt, pacman, and zypper, and I've done my share of dpkg shenanigans as well (we did unattended Debian upgrades for an IOT project).
SCA is for payment services, no? I'm in the US, and this seems to be an EU thing I'm not very familiar with, but regardless, we don't touch ecommerce at all, we're B2B, and all payments go through invoices.
If you're worried someone will disable your tools, why would you hire them in the first place? Also, that should be painfully obvious, because you wouldn't get reporting updates, no?
We do auditing, and our devOPs team gets a weekly report from IT about any devices that aren't updated yet or aren't reporting. They also do a manual check every quarter or so to verify serials and version numbers and whatnot. I've gotten one notice from our local devOPs person, and very few of my team show up as well. The ones that do show up tend to be our UX and Product teams, and honestly, they have more access to interesting info than we devs do (i.e. they have planned features for the next 6 months, we just have the next month or so). And they need far fewer exceptions to the rules, since UX mostly just needs their design software and Product just needs office stuff and a browser.
I obviously can't speak for all devs, but in general, devs tend to be more interested in applying updates in a timely manner and keeping things secure. In fact, I think all of my devs already used a password manager and MFA before starting, which absolutely isn't the case for other positions.
But it does. If your machine is compromised, and they have root permissions to run whatever they want, it doesn't matter how segmented everything is; you said yourself you jump between them (though rarely).
No, it's just a term for a defined check that configurations meet a standard. An SCA can be configured to check on any particular configuration change.
Not necessarily? Hard to tell if something is disabled vs just off.
I don't hire people… especially people in other departments.
But while I found this discussion fun, I have to get back to work at this point. Shit just came up with a vendor we used for our old archaic code that might accelerate a Rust rewrite… and, logically related to the conversation, I might be in the market for some Rust devs.
Sure, but I need MFA to do so. So both my phone and my laptop would need to be compromised to jump between networks, unless we're talking about a long-lived, opportunistic trojan or something, which smells a lot like a targeted attack.
Sounds fun, and stressful. Good luck!