Kevin Squires is Vice President, Business Technology for the Econo-Rack Group of companies (Konstant, RediRack, Econo-Rack, Technirack).
As far back as I can remember, one of the key questions for any IT department has been: “What permissions should I give a user on their own computer?”
The question seems innocent but, as many of you can attest, it has long sparked a heated debate that pits policy against effectiveness.
But let me back up a bit and explain what these permissions are and why they are such a controversial topic in many organizations.
When you assign a computer to an employee, IT usually configures it so the employee has all of the applications, printer assignments, and other settings in place and can be immediately productive. Part of this setup process defines what the user is “allowed” to do on the computer.
With admin (short for administrator) rights, the user can do everything on the computer: install new applications, modify settings, reconfigure various tools, and bypass IT policies, to name a few. With “user-specific” settings, the IT administrators define what the user can and cannot do.
Most common is the scenario where the user is not allowed to modify the computer registry (a small database on the computer where Windows stores configuration information for hardware, applications, etc.). This means the employee can’t install new applications or perform any other task that requires the registry to be updated. This effectively locks the computer down so the employee can only use what has already been installed and configured.
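For readers who want to see the distinction in practice, here is a minimal sketch (in Python, purely for illustration) of how a program can check whether it is running with administrative privileges — the very check an installer performs before it is allowed to touch protected areas like the registry. The `has_admin_rights` function name is my own; on Windows it calls the standard shell32 `IsUserAnAdmin` API, and on Unix-like systems it falls back to checking for root, the closest analogue.

```python
import ctypes
import os
import sys

def has_admin_rights() -> bool:
    """Return True if the current account can perform administrative
    actions (e.g. writing to protected registry keys on Windows)."""
    if sys.platform == "win32":
        # shell32.IsUserAnAdmin reports whether the process token
        # carries the Administrators group privilege.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    # On Unix-like systems, the closest analogue is running as root.
    return os.geteuid() == 0

if __name__ == "__main__":
    if has_admin_rights():
        print("Admin rights: installs and registry edits will succeed.")
    else:
        print("Standard user: actions requiring elevation will be blocked.")
```

A locked-down account simply gets `False` here, which is why an unauthorized installer fails before it can change anything.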
This is where the debate starts. Because the computer is locked, the IT department has control over what is installed on that computer. This enhances security dramatically since programs that are installed without IT’s knowledge could easily interfere with critical business applications.
When this happens, it typically takes an inordinate amount of IT support time to troubleshoot the issue, only to find that it was caused by “Joe’s Wine Making Program” or the malware (or virus) that tagged along with the unauthorized installation. This drain on IT resources alone has caused most companies to adopt a policy of removing admin rights from users’ local computers.
However, as if that weren’t compelling enough, according to new research from security software company Avecto, 97 percent of all critical security vulnerabilities reported by Microsoft can be mitigated by removing User Admin rights. I have to say, that made me do a double take. That’s pretty compelling.
Ok, wait. If it’s that compelling, why would you ever grant admin rights to an employee for their computer?
Great question. The answer is simple: Convenience. Users want the ability to install an application when they want to—an application that may help them make a sale or view information on another company database.
Without admin rights, they will not be able to install the program. And to make things worse, it often happens when they need to close a big sale on the weekend and can’t easily reach an IT resource. Or, just as common, senior management wants admin rights so they can install needed software at their convenience. What do you say then?
The only advice I can give is to create a robust policy around admin rights that clearly identifies the risk, and have it approved by the CEO if possible. This will give you something to point to when cornered at the coffee machine.
I would also highly recommend that you (and your team) work with employees and be extremely quick to test and install legitimate applications that aren’t initially part of the installed suite of programs. This will ensure your users always have the tools they need while mitigating the risks that having an “open” computer introduces.