Building the Right Environment to Support AI, Machine Learning and Deep Learning
Standardized online tests are bunk, and the companies that use multiple-choice tests as a sole determinant when hiring people are likely staffed with very lazy people. I don't say this just because I am a poor test taker, and I don't say this just because my 40-year-old brain is no longer nimble and good at memorizing facts. (However, it is full of too many useless facts already.) Using standardized online tests as the sole means of screening people is problematic and no better at determining potential for success than grades in college. (Many a C- student millionaire and leader will agree with me here: Bill Gates, Michael Dell, George Bush, and John Kerry included.)
A True Story
A local peer of mine recently wrote to lament a college student he hired as a developer. From his letter, he clearly was impressed by her academic record, the fact that she had earned a scholarship to an excellent university, and that she seemed articulate and bright. By all accounts, she was a superstar waiting to shine. This employer also mentioned that his technical ability (read: his ability to screen her real skills) was somewhat wanting. What was the result?
This academically successful person is failing miserably at her job. Her peers are saying she's in over her head. Her boss is seeking advice from several corners, including researching her precise academic curriculum in detail, and doesn't know how to proceed. Yes, maybe the boss needs a refresher on hiring strategies, but that's not why I wrote this.
This young person should have been a star. What happened? The answer is that grades aren't everything. Grades reflect an ability to take tests and remember facts, but many classes aren't based on critical thinking and problem solving; they are based on timed fact recollection. This suggests that test scores may be an element for evaluating potential, but not the only one, and probably not the most important one. Actual accomplishments are the best means of evaluating potential.
Another True Story
I generally refuse to take online skills tests. One reason is that I don't test well. (I am not sure whether I ever have.) Another reason is that more than one of these online testing companies has asked me to help write their tests for money and, because I don't consider them a good screening device in general, I declined their offers.
Recently, however, an agency asked me to take a C# test for a prospective customer. Being a little curious, I decided to take the test. This particular test was 40 multiple-choice questions; the time for each question was three minutes; and you could use online materials and books in the time allotted. I took the test using only extemporaneous recall. Guess what? I scored in the 53rd percentile. Out of 7,000 test takers, I scored as well as or better than about half.
Granted, had I employed Google, Visual Studio's help documentation, and the compiler, I probably would have scored somewhat higher, but what was really tested?
One way to look at the results is that I got half the questions right (about 20 out of 40), so I know 50 percent of the facts. Say there are 10 million facts; then, according to the test results, 3,500 other people and I each know 5 million of them. That's pretty good. But what good are facts?
Here is another problem. In practice, no time limits exist for solving problems in software, for the most part. If one spends an hour or two finding a fact or working out a problem, no one cares. If one is stuck after an hour and reaches out to online columns like mine or e-mails friends who may know, the problem ultimately is solved. Who really cares what the compiler switch is for range checking when a 10-second search of the help documentation will provide the answer?
Building software is about problem decomposition, solution composition, tenacity, problem solving, good tools, organization, and having the money and time to finish. Facts are the least of anyone's problems. Seventy-five percent of all projects in our industry don't fail because of an ignorance of facts; they fail because of poor planning, flagging budgets, or lousy analysis, designs, and specifications. On a team of 5–40 people, the team members can rattle enough facts off the tops of their heads to fill an encyclopedia, yet projects routinely fail.