In the course of a typical job interview, questions about your previous employer will come up. If your old job caused you immense frustration, you might find yourself tempted to use those questions as an excuse to rant about the horribleness of your former boss, working environment, colleagues, tasks, and so on.
Succumbing to such temptation would prove a mistake. A prospective employer doesn’t want to hear candidates talk about everything wrong with their old jobs. At the very least, spewing negativity will make you seem like somebody who has trouble letting go of the past and moving on (a vital career skill); at worst, the prospective employer will start to wonder whether you’d spread that pessimistic attitude to their own office.
Even if your previous job was awful, it pays to stay neutral when asked about it in a professional context. Describe your time there as a “learning opportunity” or “filled with challenges.” If you do end up talking about a bad situation, make sure to conclude the story on an upbeat note—how did you finish things in a positive and constructive way?
“Managers are looking for people who have thought critically about their tools, instead of accepting them blindly just because they are in style,” said Mark Lutz, a Florida-based trainer and author of several books on Python.
Answering the “how” questions is always important during an interview, Lutz explained, but answering the “why” questions suggests a dedication to improve and grow—not just earn a paycheck.
With that in mind, Lutz provided some of the common replies to interview questions for Python newbies, and the answers that he’d rather hear instead.
How do Python 2.x and 3.x differ, and why should you care?
- What Most People Say: “I don’t really understand the differences between the two versions, but I noticed that the print statement becomes a built-in function in 3.x.”
- What You Should Say: “The more significant 3.x changes include: differing and more pervasive Unicode support; mandatory usage of new-style classes; deeper integration of iterables and functional programming tools; and the change, replacement, and deletion of many built-in tools (not just print). Of these, the 3.x Unicode model may have the largest impact, as it touches on strings, files, and a host of application-level interfaces in the standard library and third-party domains. The new-style class model elevates topics such as the MRO, descriptors, and metaclasses from optional topics to required reading. And the more widespread role of iterables demands more careful use of tools like zip() results and dictionary key lists, which require special handling for display, multiple traversals, and other list-like roles.”
- Why You Should Say It: The Python world still uses both lines, and the vast body of existing 2.x code will probably be a permanent part of the Python ecosystem. Therefore, you need to understand both versions to maintain or port old code, or write new code that works on either line agnostically. While the first answer is correct, it reflects a superficial understanding of a major pragmatic dilemma Python programmers face today.
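The 3.x changes described above are easy to verify at an interactive prompt. A minimal sketch, run under Python 3 (the names and values here are illustrative):

```python
# Python 3 behavior for the changes above, with 2.x contrasts in comments.

# print is a built-in function in 3.x (it was a statement in 2.x).
print("spam")                        # 2.x: print "spam"

# Strings are Unicode text by default; bytes is a distinct type.
s = "spam"
b = s.encode("utf-8")
assert isinstance(s, str) and isinstance(b, bytes)

# Many built-ins now return one-shot iterables instead of lists.
z = zip([1, 2], [3, 4])
pairs = list(z)                      # must wrap to display or index
assert pairs == [(1, 3), (2, 4)]
assert list(z) == []                 # the iterator is now exhausted

keys = {"b": 2, "a": 1}.keys()       # a view object in 3.x, not a list
assert sorted(keys) == ["a", "b"]

# All classes are new-style in 3.x: every class derives from object.
class C:
    pass

assert issubclass(C, object)
```

The exhausted-iterator assertion is the pitfall the answer mentions: a zip() result can be traversed only once in 3.x, so code that displays it and then loops over it again silently gets nothing the second time.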
These three statements run in series: A = [1, 2], B = A, A += [3]. Does the third statement change B?
- What Most People Say: “No, only A changes. Wait, I think B changes because it prints as [1, 2, 3] after the last statement runs, not [1, 2].”
- What You Should Say: “B doesn’t change and continues to reference the same object it did after the second statement. Rather, the object that B (and A) reference differs at the end because it has been changed in-place through variable A.”
- Why You Should Say It: The better answer draws a distinction between variables, which merely reference objects, and in-place changes to mutable objects like lists, which is a central concept in Python. In larger programs, shared objects are often deliberately changed in-place in potentially far-flung bits of code, to update long-lived state. If you don’t understand this model, it can lead to fairly painful debugging sessions when it occurs unexpectedly. If you do, it shows deeper Python knowledge.
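The shared-reference model described above can be demonstrated in a few lines; the specific list values here are illustrative:

```python
# Names reference objects; assignment copies references, not objects.
A = [1, 2]
B = A            # B now references the same list object as A
A += [3]         # list += changes the object in-place (like extend)

assert B == [1, 2, 3]   # B "sees" the change...
assert A is B           # ...because both names still reference one object

# Contrast with an immutable type, where += rebinds the name instead.
x = (1, 2)
y = x
x += (3,)               # builds a new tuple and rebinds x only
assert y == (1, 2)      # y still references the original object
assert x is not y
```

The tuple contrast is why the question is subtle: whether += changes other references depends on the mutability of the shared object, not on the operator itself.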
What’s the point of using classes and OOP in Python?
- What Most People Say: “Because of polymorphism?”
- What You Should Say: “OOP and classes become indispensable as programs grow larger, primarily because they let you use and customize existing code, which reduces development time. OOP also provides code structure and avoids the pitfalls of global data that vex much function-based code. However, developers need to consider other issues. For instance, it’s OK to code functions, modules, and even top-level script code as long as you don’t expect them to be flexible enough to be reused in other programs. Top-level script code is always a one-program effort, because it runs immediately and has no container object. Functions can be imported and reused to some extent, but they don’t directly support growth by extension and must rely on arguments and single-copy global data for recording state information.”
- Why You Should Say It: Since OOP and classes represent a fundamental design choice, it’s crucial to understand when they should and shouldn’t be used. Classes provide a hierarchy that fosters extension in ways that functions and other tools cannot. An interviewee who doesn’t express this probably hasn’t moved beyond the trivial programs phase in the learning cycle, Lutz said.
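The growth-by-extension point can be sketched in a few lines; the Report classes below are hypothetical examples, not from the article:

```python
# Growth by extension: a subclass customizes one piece of existing code
# without copying or editing it.

class Report:
    def header(self):
        return "REPORT"

    def render(self, lines):
        # reuses whatever header() the actual instance provides
        return "\n".join([self.header()] + lines)

class DatedReport(Report):
    def __init__(self, date):
        self.date = date

    def header(self):                  # override just this one hook
        return "REPORT " + self.date   # inherited render() picks it up

r = DatedReport("2014-10-16")
assert r.render(["line one"]) == "REPORT 2014-10-16\nline one"

# A function-based version would have to thread this state through
# arguments or single-copy globals, and couldn't be specialized this
# way without editing the original code.
```

The subclass records its own state on the instance and replaces a single method, while all the unchanged logic in Report is reused as-is, which is the customization-without-modification that functions alone don't provide.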
What do you think about Python’s “batteries included” paradigm?
- What Most People Say: “I like it. Why spend time reinventing the wheel when wheels are available for free?”
- What You Should Say: “Batteries included is great, under certain circumstances. The quality of third-party code can be iffy, and the resulting product may not support your company’s needs. In truth, cut-and-paste code could become your code base’s weakest link. You need to carefully review the code and consider the consequences to make prudent case-by-case decisions.”
- Why You Should Say It: Blindly parroting the mantra that code reuse always beats writing new code could be a sign of a shallow perspective, which may produce code-maintenance nightmares down the road.
Why would you use the super() call and why would you not?
- What Most People Say: “super() is awesome, because it works just like it does in Java; you should use it whenever you can, instead of calling methods by class name.”
- What You Should Say: “super() has two primary roles: In single-inheritance class trees, super() can indeed be used to invoke a method in a superclass generically. This role is essentially as it is in Java, at least for trees that will never grow to include multiple inheritance. In multiple-inheritance class trees, super() can also be used for cooperative method-call dispatch, which routes a method call to each class just once in conforming trees. This role is unique to Python, and works by always selecting the next class on the MRO following the caller that has the requested attribute. Unfortunately, super()’s second role may have a massive downside: Its automatic method routing makes for a wildly implicit code invocation model, one that can obscure a program’s meaning, create deep class coupling, hinder customization and complicate debugging.”
- Why You Should Say It: An interviewee who describes any of super()’s downsides gets Lutz’s vote. On the other hand, a person who only lauds the Java-like role in single-inheritance trees would strike him as someone who will probably code Java in Python. Worse, the candidate may pepper a code base with complex and obscure tools in some misguided effort to prove personal prowess instead of practicing sound software engineering.
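Both super() roles can be shown in a short sketch; the class names here are illustrative:

```python
# Role 1: single inheritance -- generic superclass dispatch, Java-style.
class Saver:
    def save(self):
        return ["saved"]

class LoggingSaver(Saver):
    def save(self):
        return super().save() + ["logged"]   # no hardcoded class name

assert LoggingSaver().save() == ["saved", "logged"]

# Role 2: multiple inheritance -- cooperative dispatch visits each class
# exactly once, in MRO order, provided every class in the tree calls
# super() in the same method.
class Base:
    def meth(self):
        return ["Base"]

class A(Base):
    def meth(self):
        return ["A"] + super().meth()

class B(Base):
    def meth(self):
        return ["B"] + super().meth()

class C(A, B):
    def meth(self):
        return ["C"] + super().meth()

assert [k.__name__ for k in C.__mro__] == ["C", "A", "B", "Base", "object"]
assert C().meth() == ["C", "A", "B", "Base"]   # each class runs once
```

Note the implicitness the answer warns about: inside A.meth, super() dispatches to B, a class A knows nothing about, purely because of C's MRO. That is powerful for cooperative designs and baffling when read in isolation.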
Have you ever written a perfect program?
- What Most People Say: “Yes! And I’ll write even more if I’m hired!”
- What You Should Say: “Of course not! Even though I strive for perfection, I don’t think anyone has ever written perfect software. That’s why I test repeatedly.”
- Why You Should Say It: Perfection doesn’t happen in code and responding otherwise might suggest a towering ego that could sink an entire project.
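The repeated-testing habit might look like this in practice; the average() function and its tests are hypothetical, a minimal sketch of catching the edge case that "perfect" code forgets:

```python
import unittest

# Hypothetical function under test: even trivial code hides edge cases.
def average(values):
    if not values:
        raise ValueError("average() of an empty sequence")
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    def test_typical(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_single_value(self):
        self.assertEqual(average([5]), 5)

    def test_empty_rejected(self):
        # the division-by-zero case a confident first draft overlooks
        with self.assertRaises(ValueError):
            average([])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AverageTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```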
Health care organizations are turning to cloud computing to perform all types of functions. GigaOM recently reported that DNAnexus, a cloud startup, is using the technology to turn DNA sequences into valuable information. For example, Regeneron Genetics Center is leveraging the solution for drug research.
The news provider explained that Regeneron uses DNAnexus' technology to analyze 1,000 exomes (the roughly 1 percent of the human genome most relevant to health data) and compare these genes to potential health issues. DNAnexus streams these findings to cloud environments and translates the content in only a week, as opposed to the six months it would take with an on-site data center.
Richard Daly, CEO of DNAnexus, told the news source the process reduces both the time and funding needed to execute the research.
"They have achieved a scale where they can look for what would have been relatively rare occurrences in the data," DNAnexus Chief Cloud Officer Omar Serang said, GigaOM reported. "You need really large data sets to look for the next advancements in medicine."
Cloud-based health market set for healthy growth
Cloud computing is poised to take off in the health care field through 2017. A MarketsandMarkets report projected the industry to expand at a compound annual growth rate of 20.5 percent between 2012 and 2017.
The cloud's data accessibility is one function expected to fuel the health care market's use of the service. MarketsandMarkets explained hospitals and clinics need quicker access to data to collaborate across the entire organization and geographical locations. Achieving this goal enables caregivers to enhance patient treatments by being able to view patient records without delay.
Since the cloud health care market is poised for such rapid growth, many organizations will be first-time adopters. Choosing a cloud model and service provider may seem easy on the surface, but the number of options in both categories can be overwhelming for newcomers.
A migration tool such as RISC Networks CloudScape is the perfect way to make informed decisions about cloud computing. The solution enables adopters to view how cloud products will benefit their operations beforehand. Establishing a performance baseline before a cloud environment launches allows firms to identify any issues that may harm efficiency or result in security problems and address them appropriately.
Health organizations that take a measured approach to cloud computing will be happy they were so diligent when the service lives up to its true potential in the workplace.
The post Health care market using cloud computing for innovative research appeared first on RISC Networks.
Charles Jaffe, MD, CEO of standards organization HL7, came away from the joint meeting of the federal Health IT Policy and Health IT Standards committees earlier this week, thinking that the industry could move faster on interoperability. And HL7 has just the thing to change the game.
"I don't try to denigrate the success; I try to celebrate it," he told Healthcare IT News.
The College of Healthcare Information Management Executives, which represents more than 1,400 CIOs, and Health Level Seven International are working together to promote a standardized approach for exchanging healthcare information and to highlight the importance of developing and adopting standards to achieve interoperability.
Hot on the heels of last month’s debut of two new iPhones, Apple plans on unveiling something new at an Oct. 16 press conference.
In typical fashion, Apple is keeping the details of its announcements under wraps, but the general consensus is that the company will roll out at least one new iPad, and possibly announce that Mac OS X “Yosemite” is available for download.
Tech publications such as The Verge seem to uniformly believe that the next iPad is the iPad Air 2, and will include an upgraded processor (the A8X) along with the Touch ID fingerprint sensor already available in later-generation iPhones. This new iPad could boast an anti-reflective coating, which would render content easier to read in bright sunlight; just like the newer iPhones, it might also come with the option of a gold casing in addition to gray and silver.
Assumptions about processor upgrades and color options seem pretty safe, as Apple hasn’t really altered the iPad’s design since the tablet’s introduction, aside from making the body thinner and lighter, and the screen higher-resolution.
The rumor mill seems less clear on whether Apple will introduce a next-generation iPad mini to go along with a new full-size iPad, but considering how it’s been a year since the last iPad mini made its debut, such a move seems logical. (As any Apple watcher knows, the company likes to stick to yearly upgrades for many of its core products.) A new iPad mini would also presumably feature a more powerful processor and other upgraded internals.
Speaking of yearly upgrades, it’s a near-certainty that Apple will use its event to announce the release date of Mac OS X “Yosemite,” the latest upgrade of its desktop operating system. New features include the ability to take iPhone calls on a Mac, “hand off” documents between desktops and iOS devices, and a streamlined version of Apple’s Safari browser.
But will Apple also debut new MacBooks, or even a new iPod? The answer to that will likely remain murky until Oct. 16.
Over the last five years, demand for Ruby on Rails skills has quadrupled and is proving to be a lucrative feather in the cap of developers, according to data from PayScale, an online salary, benefits and compensation information company. The relative ratio of workers who report it as a skill critical to their role in the […]