I'm working with Industrial Logic's newest Python album on microtesting (not yet released) and was lucky enough to get to test out the automated critique.
So here's how the eLearning works: you download a problem to work on, and you're given tasks to perform. In this case, it's all about writing microtests for some simple Python code. When you finish, you upload your results, and an automated critique system digs through the code and gives you ratings and pointers. It's rather like having a mentor sitting with you, reviewing the code.
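For anyone who hasn't met the term: a microtest is a tiny, fast test that checks one behavior of one unit in isolation. Here's a minimal sketch in Python's unittest; the `word_count` function and the test names are my own illustration, not the course material, which may use a different style or framework entirely:

```python
import unittest

def word_count(text):
    """Toy function standing in for the kind of simple code the album exercises."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each microtest pins down exactly one behavior, with a name that says which.
    def test_counts_words_separated_by_spaces(self):
        self.assertEqual(word_count("red green blue"), 3)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```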
I tell people that the eLearning here is unlike anything they've seen, but they assume I'm just marketing. Today I have a story for you:
Yesterday I made a mistake and the automated critique busted me. I had constructed an object incorrectly, invalidating the premise of the test, yet in a way that let the test pass for the wrong reason. I was so sure I'd done it correctly that I thought the critique was wrong. I had pasted a copy of the test into my error report before I realized I was the one in the wrong.
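The post doesn't show the offending test, so here's a hypothetical reconstruction of the pattern, not the actual course exercise: the object under test is constructed so that the test's premise never holds, yet the assertion still passes. The `Cart` class and the discount scenario are my invention for illustration:

```python
import unittest

class Cart:
    """Toy shopping cart, standing in for whatever object the real test built."""
    def __init__(self, items=None):
        self.items = items or []  # list of (name, price) pairs

    def total(self):
        return sum(price for _, price in self.items)

    def apply_discount(self, rate):
        self.items = [(name, price * (1 - rate)) for name, price in self.items]

class CartTest(unittest.TestCase):
    def test_discount_reduces_total(self):
        # The mistake: Cart() is constructed empty, so the premise of
        # the test (a cart with something to discount) never holds.
        cart = Cart()
        before = cart.total()
        cart.apply_discount(0.10)
        # Passes for the wrong reason: 0 <= 0 is true whether or not
        # apply_discount does anything at all.
        self.assertLessEqual(cart.total(), before)

if __name__ == "__main__":
    unittest.main()
```

Tests like this are insidious precisely because they're green from day one; a reviewer, human or automated, has to notice that the premise was never established, not merely that the assertion currently holds.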
The coolness didn't stop there, though. Through all of this, my tests looked nothing like the tests presented in the course material. The eLearning system allowed for my stylistic differences and flagged only the actual error in my code.
Slightly humbled, I corrected my test and uploaded it again for analysis. This time I scored 100%. The online course had given me advice and allowed me to try again.
When it comes to developing skills, anyone in the business will tell you that guided experience is the best teacher, but I've never seen a system guide my skill development like this one does. Today I found out there's even more guiding intelligence on the way. Watch this space for details.