Lessons learned from twenty-five years building software, recruiting teams, and managing growing firms.
Thursday, May 03, 2012
Time to Kill the Relational Model
- Normalization produces a lot of tables. A lot of tables translate into a lot of joins. A lot of joins is a red flag for complexity. When queries frequently involve three or more joins, the schema is probably overly complex.
- Relational theorists discourage the use of null foreign keys. The only way to honor that rule is to introduce a link table for the one-to-many relationships in the model. This introduces two problems: first, the extra table adds an additional (and unnecessary) join; second, it is harder to determine whether a row in one table is related to a row in the other (see the sketch below).
- Using data to determine state (or status) introduces complexity. While not necessarily a tenet of relational modeling, some designers prefer to compute the state of an item on the fly based on data in the tables. This is easy enough when all the data needed to determine the state are stored on the same table row. Unfortunately, it is frequently necessary to scan multiple rows across additional tables to calculate the status.
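To make the link-table complaint concrete, here is a minimal sketch using SQLite from Python. The patient/physician tables are hypothetical stand-ins, not any real schema; the point is only that the link table turns one nullable column into an extra table, an extra join, and a less direct answer to "are these rows related?"

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE physician (physician_id INTEGER PRIMARY KEY, name TEXT);

    -- Design A: a nullable foreign key (the thing purists discourage).
    -- One column answers "is this patient assigned?" directly.
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        name TEXT,
        physician_id INTEGER NULL REFERENCES physician(physician_id)
    );

    -- Design B: a link table that eliminates the null.
    CREATE TABLE patient_physician (
        patient_id INTEGER NOT NULL REFERENCES patient(patient_id),
        physician_id INTEGER NOT NULL REFERENCES physician(physician_id),
        PRIMARY KEY (patient_id, physician_id)
    );
""")

# Design A: unassigned patients fall out of a simple predicate.
unassigned_a = conn.execute(
    "SELECT patient_id FROM patient WHERE physician_id IS NULL"
).fetchall()

# Design B: the same question now costs an extra join (or NOT EXISTS).
unassigned_b = conn.execute("""
    SELECT p.patient_id
    FROM patient p
    LEFT JOIN patient_physician pp ON pp.patient_id = p.patient_id
    WHERE pp.patient_id IS NULL
""").fetchall()
```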
Wednesday, December 14, 2011
Microsoft Office 365 Cloud-Based Productivity Service Now Helps Customers Comply with HIPAA Privacy and Security Standards (Microsoft in Health, MSDN Blogs)
With reimbursements falling and medical loss ratio minimums rising, hospitals, physicians, and health plans are under unprecedented pressure to drive down operating costs while still improving the quality and safety of patient care. The economic advantages of cloud-based productivity solutions to drive down operational costs and complexity are well understood, but for most health organizations, HIPAA security and privacy concerns have been a showstopping barrier to realizing the full anywhere, anytime productivity potential of cloud-based technologies.
That is, until now. Today ... Microsoft is helping remove that barrier by embedding privacy and security capabilities in Office 365, our next-generation cloud productivity service. This means that Office 365 is now a cloud-based platform that complies with leading information privacy and security standards for customers operating in the United States and European Union. As part of its contractual commitment to customers, Microsoft will now sign business associate agreements under the U.S.-mandated Health Insurance Portability and Accountability Act (HIPAA).
Thursday, October 27, 2011
Why Digital Talent Doesn’t Want To Work At Your Company | Fast Company
- Every element of their work will be pored over by multiple layers of bureaucracy. Even if that’s how the rest of the company operates, it can’t spill into the digital department. In a technology environment, new products and businesses spring up daily and a new endeavor can go from conception to launch in a matter of months. Reining in the momentum will be read as inaction and a clear signal the company isn’t willing to grasp the new way of the world.
- Mediocre is good enough. While clocking out at 5 p.m. is attractive to some, it will discourage digital talent. They want to be expected to do something great. They want to be pushed. They care about their work. Their leadership, and those they rely on to get things done, must match their appetite for success.
- Trial and error is condemned. The freedom to try out new ideas allows employees to take initiative, make decisions, and learn from their mistakes. It also demonstrates an attractive and inspiring entrepreneurial spirit.
- Your company is structured so it takes a lifetime to get to the top, and as such there are no digital experts in company-wide leadership positions. Digital talent, often in their 20s and 30s, needs to see a clear path for uninhibited career development that's based on merit, not years spent, and that's beyond the confines of the digital department. If they don't, they won't see a reason to stay with the company in the long term.
- Your offices are cold, impersonal, and downright stodgy. It may sound like it conflicts with the "you don't need to be in Silicon Valley" point, but appreciate the nuance. A traditional office layout is designed to communicate power among certain individuals and barriers between departments. This does not support the collaborative ethos which is intrinsic to the web. Companies should do everything possible to provide the digital team friendlier, open office space. A location in a hip, young neighborhood (which surely exists in every mid- to large-sized city) is also a big plus.
Wednesday, February 18, 2009
Don’t think of BI as an all-in-one solution
I recently had a short conversation on Business Intelligence with one of my peers. I tried to explain the premise that a Business Intelligence application in our industry (Health Care) should not be a one-size-fits-all solution. Instead, the technology should be tailored to the types of questions it will need to answer most frequently.
When he claimed "of course it has to be able to answer any question, otherwise we could just write queries," I realized I had failed to make my point. He is not the first person I've met to have this opinion. In fact, the opinion is pretty pervasive among my peers, and it is wrong.
Our Business Intelligence solution is a textbook case highlighting this point. Its saga is a story for another day, but in our attempt to make it very flexible we failed to make it strong. That is, there's no limit to the reports you can create, but it's not great at answering any particular question.
I liken this to the difference between a hammer and a Swiss Army knife. A hammer is great at driving nails, better than any other tool for this task. It happens to be pretty good at removing nails too. A Swiss Army knife can do a lot of things, from clipping nails to opening cans, but it's not particularly good at any of them.
The real beauty of a hammer, though, is the other things it can do pretty well. In fact, if put to the test, it isn't hard to come up with at least as many tasks for a hammer as for a Swiss Army knife. It can be a door-stop, a paper-weight, a meat tenderizer, a garden shovel, and more. Sure, it's lousy at tightening screws, but it can really drive nails.
Don't get me wrong. There's a place for firms that build all-in-one software. In fact, my former employer Information Builders is one such firm. My current BI solution is built on a MicroStrategy platform, another Swiss Army Knife vendor. Vertical solutions, like our health care application, need to be targeted; they need to be really good at particular questions.
Unfortunately, many designers of Business Intelligence solutions try to make Swiss Army Knives when they really need hammers. And given a good hammer and an innovative user, there will soon be many other tasks suitable for the tool.
Tuesday, February 10, 2009
Somebody agrees with my Chrome prediction
Tuesday, July 08, 2008
eWeek looks at Computing on the Cloud
My paper copy of eWeek is generally fodder for the circular file. In fact, the magazine rarely makes it as far as my office. Instead it stays in our reception area on a coffee table along with other unread magazines. This week, though, I picked up a couple of editions to browse while taking lunch. In the process I discovered some interesting cloud computing technologies.
The first is old news for leading-edge developers, but new to me. The June 30 edition ran an analysis of the Google App Engine, which competes with Amazon Web Services in scope and intent. Although Google currently supports only the Python language, betting an application on Google infrastructure seems pretty safe.
In a similar vein, GigaSpaces offers an application server that sits on Amazon's EC2 cloud computing solution. The GigaSpaces application server provides a middleware layer between Java or .Net applications and the Amazon Web Services backend.
A third product, Jungle Disk, has a different goal, but uses computing on the cloud nonetheless. Jungle Disk is a backup and storage solution that works with Amazon's S3 web service. I have not tried Jungle Disk, but eWeek offers a fairly thorough analysis that puts the product on my to-do list.
Of course the interesting common thread among all these applications is computing on the cloud. GigaSpaces and Jungle Disk take advantage of Amazon's early entry into the utility computing space. Google, however, does everything well, and should have little trouble catching up. Old-timers like me remember the days of time-sharing on mainframes. Cloud computing proves what's old is new again, with one important difference: now utility computing is affordable.
Thursday, July 03, 2008
Once You’re Lucky
I often pick books based on reviews from Wired magazine. This was true when I picked up Once You're Lucky, Twice You're Good. In Mark Horowitz's review, he claimed that "Sarah Lacy…hangs with [entrepreneurs], gains their trust, and gets the goods. No other recent chronicle delivers such intimate, behind-the-scenes glimpses into Silicon Valley startup life." It's an entertaining read reminiscent of Accidental Empires.
My issue with the current crop of Valley startups has little to do with Sarah Lacy's book, although her tales reinforce my opinion. The problem is that very few of these companies actually provide a useful site. Even Facebook, the current reigning king of Web 2.0, doesn't help its members solve problems or get things done. Of course, my days of finding a good off-campus kegger with lots of girls are long past. As for Twitter, Slide, and Ning, they seem even less useful.
What's interesting about these sites, though, and the stories Sarah wraps around them, is the underlying technology. Maybe an old-timer like me doesn't see the benefit of social networking, but I do see how collaboration and go-anywhere sites can be very useful. I'm expecting a new wave of web startups that take Web 2.0 into truly commercially viable areas.
Friday, June 20, 2008
Thoughts on some Web 2.0 Sites
The expression Web 2.0 has been with us so long that we can consider it a Tired term. Founded on the principles of collaboration, and built on highly interactive technology (read: AJAX), Web 2.0 sites represent the post-bubble rebirth of an industry. Some of these sites offer fairly valuable services and some of them are nonsense. Here are my thoughts…
Facebook is seen as the company with the most potential for long-term success. I've been using the site almost daily for over a month and I can honestly say that I don't get it. Sure, I'm a bit of a dinosaur, having attended college (both degrees) long before social networking came to the web. But in all my use of the site over the last several weeks, it hasn't helped me accomplish anything. Facebook has been pretty successful at positioning itself as a platform for micro-applications. Here again, I don't get it. I've tried many of the applications and always end up with the same thought: "So what?" I read somewhere that everyone eventually has an "ah-ha" moment with social networking; I'm still waiting for mine with Facebook.
In contrast, I saw the benefit of LinkedIn the day I started using it nearly five years ago. This site has helped me find work, consultants, and business leads. I've heard it called social networking for business people, but there's really very little that's social about it. The site generally prevents people from contacting strangers, at least without an introduction. Ironically, you can pay money to override this fundamental aspect of the site. Paying members have access to InMail messages and can reach out to people directly. Anyone in a career should be active on LinkedIn.
Plaxo is an odd hybrid of Facebook and LinkedIn. I actually avoided using Plaxo for some time, opting instead for GoodContacts, but when GoodContacts looked like they weren't going to make it, I switched loyalties. Plaxo suffers from an identity crisis. It started as a convenient way of managing business and personal contacts online. I use it as my main address book, and sync Outlook, my Blackberry, and other sites to Plaxo. Somewhere along the way, though, the site morphed into Plaxo Pulse. The new site is a clone of Facebook, right down to the page layout and color scheme. I still use it to keep my address list, but I don't think this company will survive.
Geni is a cute social networking site for families. I checked it out after reading Once You're Lucky. This site is actually pretty good. It is easy to use and has all the capabilities needed to stay in touch with your extended family. Nonetheless, I feel this site is doomed. I make this judgment simply because no one seems to know anything about it. My family is so tired of receiving web site invitations from me that they've all but ignored the Geni invites. And without active participation from my family, the site loses its usefulness. Add in the fact that there is no subscription fee or advertising, and I have no clue how the site expects to earn revenue.
Twitter is the dumbest idea I've ever seen. Still, it's pretty addictive. I checked it out because I am starting a project that requires a similar SMS interface. My project, however, will be useful, whereas Twitter simply generates noise.
I have a love/hate relationship with Flickr. I use the site a lot, storing all my family's pictures there. I uploaded so many pictures that I had to buy a subscription. Flickr was a leader in establishing sharing, but now its UI seems dated. I find it difficult to use, or at least difficult to learn how to use. And I am disappointed in the site's "badge" feature, which led me to seek alternatives, including Slide.
Krugle on the other hand is one of the best and most useful sites I've found. Of course you have to be a programmer to appreciate the site's benefits, but for those of us developing software for a living the site is amazing. Forget Google Code Search; Krugle is the place to go for snippets of code and projects in the public domain.
YouTube is silly and innocent fun. If you're reading this far into this post, then you already know all about YouTube.
Some people claim that Google Docs is meant to replace Microsoft Office. Google, of course, denies such claims by saying that Google Docs is meant to augment productivity suites. Frankly, I don't care about the pending Microsoft vs. Google wars. I like Office and I like Google Docs. I've read that Google Docs is not as feature-rich as Office, and certainly the Google toolbar has nowhere near the number of buttons as the Office Ribbon. That said, I've never looked for a feature in Google Docs that it didn't have. I guess that says something about the feature bloat in Office.
Everybody uses maps online. MapQuest was amazing when it first came out. Google, however, really raised the bar when it introduced Google Maps. Now all the map providers have full-screen maps that pan as you drag them. They all have satellite pictures and zoom. Google has Street View, and Live has Bird's Eye View. Of Google, Live, Yahoo!, and MapQuest, I like Live best. But they're all good.
I checked out Slide because I wanted a cool way to show pictures on my web site. Slide has some cool features, but I was a little disappointed. I can't imagine building an entire business around slide shows, so I don't expect this site to last long.
Popfly is Microsoft's site for demonstrating how cool their Silverlight technology is. It's kind of cool for developers who are building mashups and don't mind using Microsoft technology. That's probably a pretty small group, but since I am technology agnostic, I use it and think it's a pretty cool site.
Delicious, on the other hand, is about as uncool as you can get. I think the site is ugly and serves little purpose. On the contrary, Trailfire is of similar ilk but is amazingly friendly and helpful. What a difference a thoughtful UI can make. Unfortunately, Delicious is the better bet for longevity in this space, as they have the backing of Yahoo! and some Web 2.0 brand recognition.
Wednesday, April 16, 2008
Cash is King
There is a common notion that most start-ups fail for one of two reasons: either they are undercapitalized or they do not sell aggressively. In both cases, the firm fails to raise the amount of cash needed to sustain operations. Of course all failed businesses run out of money, but some never give themselves a chance.
One firm that never gave itself a chance was Elite Technology Partners. The company was born from the technical brain trust of Blackwood Trading, and quickly conceived a product and business model based on the founders' experience at that firm. Their product was greatly inspired by Blackwood's, but targeted a specific strategy and a different distribution channel.
Elite's founders had learned from their experience at Blackwood. Certainly, they put to use the intellectual capital gained there. They also learned from Blackwood's failed effort to market itself successfully. In addition, the founders recognized that Blackwood's operational model was very expensive.
However, Elite underestimated the effort required to develop a product and bring it to market. Very quickly the firm fell into a chicken-and-egg situation: it couldn't sell its product because it did not have the capital to complete it, and it could not raise cash through sales because the product was not finished. Add to that the economic environment of 2002, in which investment capital had effectively dried up. Venture capitalists were only looking at firms already generating cash through sales.
In Elite's case, the lack of cash led to a series of strategy changes that sealed the fate of the company. First, the shortage of cash caused many of their best engineers to secure positions with other employers. Elite then sought an investment arrangement with key prospects that would enable it to bring its product to market. But partnerships with a large investment come with restrictions; demands for exclusive use and ownership of intellectual property made partnership deals impractical.
To raise cash, Elite sold its talent in consulting arrangements. The consulting business was profitable and brought in enough cash to keep the firm operating. Unfortunately, the opportunity cost of consulting was a halt to further development of Elite's product. In the end, Elite failed to complete its product because it failed to raise the capital necessary to build the technology.
I am working with an entrepreneur who is in a similar position. He is following a conscious strategy of delaying fundraising despite having an established relationship with investment bankers. The entrepreneur is betting that signed contracts will make the firm more attractive to investors. This may be true, but with no cash in his firm, he will have a difficult time meeting any operational commitments.
My observations of Elite Technology Partners lead me to believe that my friend has a risky strategy. Because he refuses to seek capital, he will not have a fully functional team in place when he signs his first contract. When the contract is signed he will need to complete his technology, hire staff, and seek capital within a very short timeframe. In the three to six months needed to receive private equity cash, his venture could fail.
Elite Technology Partners failed because it did not raise cash for its operations. Successful entrepreneurs beg, borrow, or steal (OK, maybe not steal) enough to give their companies inertia. A firm with inertia will attract further investment, or generate cash organically.
Wednesday, February 13, 2008
Data Warehousing Dilemma
Kimball's The Data Warehouse Toolkit was an epiphany and an inspiration. Suddenly before us was the solution to our performance woes. His solution? A star schema, in which the warehouse's measures are retained in a single table linked to related descriptive data. All the descriptive data (dimensions) are related through the single fact table containing the measures.
Our project started simply enough. Using sample report templates from our Product Management team, we determined a grain for the warehouse and computed the appropriate measures. The grain, by the way, is the lowest level of detail needed to answer the questions asked of the data.
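As a concrete illustration, here is a minimal star-schema sketch in SQLite via Python. The claim-line grain and the table names are hypothetical stand-ins, not our actual warehouse; the point is the shape: dimensions around one fact table at the chosen grain.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimensions hold the descriptive data.
    CREATE TABLE dim_provider (
        provider_key INTEGER PRIMARY KEY,
        name TEXT,
        specialty TEXT
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        cal_date TEXT,
        cal_month TEXT
    );

    -- One fact table at the grain: one row per claim line.
    CREATE TABLE fact_claim_line (
        provider_key INTEGER NOT NULL REFERENCES dim_provider(provider_key),
        date_key INTEGER NOT NULL REFERENCES dim_date(date_key),
        billed_amount REAL,   -- the measures live here
        paid_amount REAL
    );
""")

# Every report is then a sum of measures sliced by dimension attributes.
report = conn.execute("""
    SELECT d.cal_month, p.specialty, SUM(f.paid_amount) AS total_paid
    FROM fact_claim_line f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_provider p ON p.provider_key = f.provider_key
    GROUP BY d.cal_month, p.specialty
""").fetchall()
```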
It soon became apparent that there was a flaw in our design. Not the design of the warehouse, per se, but the design of the system. The reports designed by our Product Management team were at the level of the grain. Running them would produce thousands of pages of detail. Nowhere were we taking advantage of the warehouse's dimensions to drill into these reports. Seeing the flaw, we tasked our Product Managers with spec'ing the entry points to their reports.
What was returned to us was a disaster. Instead of taking advantage of the warehouse or even the capabilities of the Business Intelligence technology, the PMs designed more reports. These new reports were summaries with drill paths to the detail provided earlier. They also contained an entirely new set of metrics, all calculated at a different grain.
The problem of the different grain was exacerbated by many of the new metrics. These metrics were computed by dividing aggregated counts. The counts, however, were not on the fact table; instead they were distinct counts of dimension values. Therein lies the dilemma: we needed figures that could not be pre-computed into our cubes. The metrics were computed on the fly and resulted in tremendous performance problems.
I believe the solution is simple and obvious: we need additional fact tables at different grains. The purists on my team didn't see it so clearly (they will by the time we're finished). The problem is that Kimball's treatise on data warehouses discourages multiple fact tables in the database schema.
Kimball oversimplifies warehouses with the star schema. Any complex set of data will have measures that cannot be summarized into a single fact table. In truth, multiple fact tables will be an integral part of any practical solution based on a data warehouse.
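Continuing the hypothetical sqlite3 sketch from above (it assumes `conn` and the claim-line star schema already exist), the fix is a second fact table at a coarser, monthly grain, with the troublesome distinct counts computed once at load time rather than on the fly:

```python
conn.executescript("""
    CREATE TABLE fact_monthly_summary (
        cal_month TEXT,
        specialty TEXT,
        paid_amount REAL,           -- additive measure, rolled up
        distinct_providers INTEGER  -- non-additive; valid only at this grain
    );

    -- Populated once per load, so reports no longer pay for
    -- COUNT(DISTINCT ...) at query time.
    INSERT INTO fact_monthly_summary
    SELECT d.cal_month,
           p.specialty,
           SUM(f.paid_amount),
           COUNT(DISTINCT f.provider_key)
    FROM fact_claim_line f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_provider p ON p.provider_key = f.provider_key
    GROUP BY d.cal_month, p.specialty;
""")
```

The distinct count is the telltale: it is not additive, so it cannot live on the claim-line fact table; it has to be stated at the grain where it is valid.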
Wednesday, January 23, 2008
Are we dumbing down programming?
It's true that abstraction is the most intangible concept of OOP. During my phone screens of candidates, though, I find myself wishing I had tipped them off in advance: I will be asking about OOP, so go look up "object-oriented programming" on Wikipedia.
I generally blame our universities for this failure. OOP is largely conceptual, and should be introduced and reinforced by any Computer Science department worth its accreditation. Once in the workplace, software engineers rarely get proper mentoring on solid coding habits.
We really can't lay blame entirely on universities and trade schools. No, much of the problem lies with the technologies we use to build applications. High on the list of offenders are Visual Basic, Visual Studio, Java, and .Net. Throw in HTML, XHTML, XML, and all the mark-up language derivatives. Then add in any of the web development tools like Cold Fusion and Flash. Of course, the scripting languages (JavaScript, VBScript, and Perl) virtually prevent solid coding practices.
My obsession with OOP stems from a very specific business need: I have to support ten software products with a very modest staff. The most basic way to accomplish this is by reducing the amount of code, and an obvious way to reduce code is to reuse it. Unfortunately, I inherited a situation built on copy-and-paste code. After three years of fighting copy-and-paste habits, we still support multiple versions of code that perform the same task.
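To illustrate the trap, here is a hypothetical example (not our actual code, and in Python rather than our stack): two products each carry their own copy of the same file-parsing logic, versus one shared class they both call.

```python
# The copy-and-paste trap: two products, two near-identical copies.
# A bug fixed in one copy silently survives in the other.
def load_claims_for_product_a(path):
    records = []
    for line in open(path):
        fields = line.rstrip("\n").split("|")
        records.append({"claim_id": fields[0], "amount": float(fields[1])})
    return records

def load_claims_for_product_b(path):
    records = []
    for line in open(path):
        fields = line.rstrip("\n").split("|")
        records.append({"claim_id": fields[0], "amount": float(fields[1])})
    return records

# The reuse alternative: one implementation, tested once, shared by all.
class ClaimFileReader:
    def __init__(self, delimiter="|"):
        self.delimiter = delimiter

    def load(self, path):
        with open(path) as f:
            return [
                {"claim_id": parts[0], "amount": float(parts[1])}
                for parts in (line.rstrip("\n").split(self.delimiter) for line in f)
            ]
```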
Many developers confuse using objects with OOP. Dropping a control onto a form does not constitute object oriented programming. In fact, there will be nothing reusable in the result. In addition, the automatically generated code written by the action of dropping the control is almost certainly unreadable. But then, many developers today don't even realize code is being generated.
So our tools have dumbed down programming skills, especially for those developers who rely on the designers and tools built into their development environments (IDEs). For me, I'd like to find an engineer or two who would love to create the next Visual Studio, instead of dragging controls from a toolbar like some pre-programmed automaton.
Thursday, January 10, 2008
How much detail in a requirement?
Requirements are written in varying detail. During my tenure with DoubleClick in 2000, we had initiated a project for the advertiser side of our industry. In my ten months with the firm, the document was never finished. The last time I saw it, it exceeded 300 pages. We appointed a steering committee to oversee the project and they would debate the document endlessly. And the result of the debate? A bigger document.
In short, the entire exercise was useless. In parallel to the requirements fiasco, an engineering team was already well into the project, virtually ignoring the written requirements. I have seen this practice of overdoing requirements at several firms, including DoubleClick, Information Builders, and Ameritech. In most cases the resulting document is either too large to be useful, or is ignored by the developers.
Of course the reverse is often true too. In fact, the reverse is probably much more frequent. For example, my team recently received the following requirement for a component of a new product:
"Management reports – using multiple selection criteria, produce reports that track turnaround time and other metrics."
Think about the number of questions opened by this simple one-sentence requirement. What reports? How many? What columns, rows, sorting, totals? What selection criteria? What is turnaround time? What other metrics? In fact, this requirement tells a developer nothing. There is absolutely nothing the system architect can do except return to the business analyst and ask questions. To make matters worse, the business analyst believes he has provided a useful document.
Then there are requirements that mask the business need in technical details. Some might read something like:
"Add a column to the x table to hold a flag. Display the flag on the detail page. When transactions are received for records with the flag set to 0, ignore the transaction."
This might look like the appropriate middle ground between the extreme examples illustrated earlier. But it's not. The business analyst has interpreted the business requirement and supplied a solution. Although many business analysts fancy themselves system architects, I have yet to meet one who is better than the engineer. What is the requirement above? A method of disabling master records? A method to manually override batch operations?
The plain fact is that it is hard to write useful requirements. And a project with poor requirements is destined for delays and cost overruns. If you wish to keep your requirements meaningful, stick to the following rules of thumb:
- Describe the business need (not the technical solution)
- Keep it brief (if a single capability takes more than a couple of pages, you're too verbose)
- Describe everything (if there are four reports, describe each with its own requirement)
When you avoid the pitfalls of poor requirements, your project will be constructed on a solid foundation. The chances of success will be far greater.
Friday, January 04, 2008
SDLC
I suppose if it were easy, then there wouldn't be entire shelves at Barnes and Noble devoted to it. If it were easy, then everyone would deliver with great success. I'm becoming convinced that no single methodology is portable across development shops, and that a successful SDLC (System Development Life Cycle) is a painful trial-and-error process that adopts aspects of several disciplines.
I've tried the standard waterfall. It generally has two problems. First, it is nearly impossible to completely define all the functions of a system prior to writing any code. And second, it is very difficult to prevent scope creep.
In the first case, a business analyst has to be imaginative enough to define all aspects of the system. Then she must be able to accurately write these into requirements that are understandable by developers. I've never seen this done well. When overdone, the volumes of requirements become impossible to sift through; when underdone, entire aspects of logic are left undocumented.
Waterfall projects tend to be very long, running months or even years. This causes the inevitable panic when business analysts realize a pet feature is not included. The panic often results in scope creep, which in turn causes the project to run longer.
We've tried incremental methodologies, such as Agile, too. OK, you say, Agile isn't a methodology but a class of methodologies. But does anyone really implement strict Extreme Programming, Scrum, or EVO?
Regardless, Agile methods have their built-in weaknesses too. For instance, how do you know when you are done? Or, as is the case with my current projects, resources get diverted to new critical projects, leaving others unfinished.
When it's all said and done, some hybrid method seems to work best. Concrete, but overlapping, phases are necessary for project management. Within each phase, iterative cycles with feedback have great benefit. Who knows, maybe I'll develop my own methodology, and then write a book (that no one will read).
Tuesday, April 24, 2007
What happened to OOP?
I have to admit that I am amazed at the percentage of developers who do not know the fundamentals of OOP. Even developers coming right out of school struggle with this conversation, despite the fact that most of our entry-level developers come out of master's programs in Computer Science.
It has caused me to wonder if OOP is falling out of favor. If so, then what is replacing it? Gang of Four patterns? Something else?
You could say I am an old-school developer. I learned to code when the style was structured programming. There was no concept of OOP when I earned my Computer Science degree. I was introduced to OOP several years later, when building my first applications for Windows. I immediately saw the beauty of maintaining a single piece of reusable code; it was a logical extension of function libraries.
OOP was a natural evolution of structured programming, and yet I was amazed at the number of my colleagues who did not make the switch. Those who didn't were relegated to mainframe jobs and maintenance of legacy systems. The best engineering opportunities were given to those who were evangelists of object-oriented programming.
But as Internet development took off during the first dot-com boom, a couple of trends started. One was the adoption of Visual Basic, and the other was design patterns from the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides).
In my mind, Visual Basic is and was a horrible trend in the practice of software development. The language, especially in its early versions, encouraged poor programming practices. And Microsoft's CASE-style designer tools only exacerbated the problem. Departmental developers in corporations picked up the easy-to-learn language and churned out applications that were impossible to maintain. Although recent versions of the Basic language implement OOP constructs, traditional VB developers generally do not use them.
Design patterns should have been an advancement in OOP development. In practical use, however, programmers often use patterns without practicing OOP themselves. Much the same way that procedural developers use Java or .Net and still fail to implement OOP.
The problem for me, as a manager and leader of engineering teams, is finding people who write code that is easy to maintain. To a degree, I equate maintainability with reusability, because code that is reused is not rewritten, and code that is reused is tested frequently. Coders who do not understand or practice disciplined OOP fall into the copy-and-paste trap. When this happens, many versions of similar code appear throughout the source, creating a maintenance nightmare.
The irony here is that descriptions of object-oriented programming are very common. Wikipedia, for instance, has an entry for OOP that a developer could review and understand in a couple of minutes. I hope, too, that Computer Science programs strive to instill these basic concepts in their students. In the meantime I continue my search for solid OOP engineers who will help evolve our products.
Tuesday, March 06, 2007
Searching for Code
Krugle searches code, tech pages, and projects. Code and projects return essentially the same thing: links to source code. Tech pages will find your search term in blogs and newsgroups.
I find the code search especially helpful. I have searched for grid computing, sorting, and EBCDIC, and have found useful code in each case. For me, the projects are most interesting, as I want a complete solution.
The site has a sweet Web 2.0 interface that includes plenty of Ajax. It also allows sharing of notes. One point of contention, though: I tagged several files with comments but was unable to find them (my comments) later.
Google has a code search capability too, but I believe it is inferior. In Google's case, the search results highlight keywords in source code files. True to Google form, its code search is very spartan. Google does not have the Web 2.0 capabilities of Krugle, but it is fine for quick-and-dirty searches for a very specific algorithm.
Kudos to Krugle. I suggest you add it to your bag of tricks.
Wednesday, February 28, 2007
What is a CTO?
First and foremost, the Chief Technology Officer is the technical visionary for the company. In this role he must be the evangelist for technology; keeping products and services competitive. He must set a clear path to achieve the goals for the vision. And he must assure that everyone knows the vision.
A great CTO will be a passionate advocate for best practices of engineering, quality assurance, and technology operations. Of course, to advocate best practices, he has to know the best practices. These practices will include agile development methods, test driven engineering, and thorough securing of infrastructure.
The CTO must have intimate knowledge of the technologies required by his vision. He must be passionate about the platform when the platform is specific, and agnostic about the platform when the vision is independent of it. The best CTOs are not concerned about Microsoft vs. Linux or Java vs. .Net. Instead they push the platform needed to accomplish the goals of the company.
The best CTOs are excellent managers and leaders. They recruit talented staff and have great retention rates. They understand the value of knowledge capital and continuously encourage learning. And they keep their own knowledge sharp too.
Through vision and planning the CTO will instill confidence from other senior managers. He will keep his products and services best-in-breed. And he will give customers confidence that their solution will get better and better.
These are the points I should have made when asked, "What is a CTO?"
Friday, February 09, 2007
Implementing Agile Development
The bigger problem is moving a team steeped in serial waterfall methods to iterative methods. Some people simply don't get it; they don't get collaboration; they don't get accepting change; and they don't get emphasizing software over documentation. All of which is surprising, because most development teams never receive adequate documentation, constantly deal with change, and usually brainstorm to solve problems.
For us, de-emphasizing documentation shouldn't be so bad; after all, we don't receive good requirements anyway. But strangely enough, there are developers who still believe that one big master document is necessary for successful projects. These people are wrong. It is wasteful and expensive to attempt to write a complete specification prior to engineering the software.
I saw this firsthand at DoubleClick, Inc. several years back. The firm hired a team (larger than my current department) dedicated to writing specs. The documents produced by this group were huge, numbering hundreds of pages. And the details were debated ad nauseam, leading to stagnation. What was produced was documents; what wasn't produced was working software.
A greater problem for us is managing multiple projects. Our technologies are not constructed on a common code base, so each project becomes its own set of increments. Currently we have five projects under development. That's a lot of work for a staff of seven plus four consultants. With all these concurrent projects, managing increments becomes difficult. After all, it isn't practical to deliver an increment every week. Or is it?
We seem to do all right with collaboration, but there is room for improvement. It helped to implement daily stand-up, or scrum, meetings. The meetings are short and focus on the goals for the day. We have not attempted to implement strict pair programming, although there is very frequent teaming on tasks.
I remain a strong proponent of agile and iterative development. Over the next couple of months we will be able to take an objective look at the results of these methods.
Wednesday, January 24, 2007
Strategies for Performance
We're also taking steps to move off a pure client-server architecture to an n-tier architecture delivered through a browser. Typically our applications have a small number of users who submit long-running queries. There are two primary stress points for performance: loading the data repository and running queries (reports).
We are attacking the problem across several fronts. First, we're throwing hardware at it. Second, we are upgrading the OS and database platform. And finally, we are optimizing the applications. Note that we are not considering using a server farm for the application servers. We believe the low number of hits to the web apps make scaling the application server a lower priority.
Throwing hardware at the problem is the easiest and quickest way to scale. In our case, that means moving the application server to a separate box, and purchasing more power. More memory, more speed, and more processors. We all know, however, that this type of solution simply covers up bottlenecks in the application.
We are also stepping up to SQL Server 2005. In the standard edition, which most of our customers deploy, SQL Server 2005 will use 4 CPUs and as much memory as the OS can give it. Some of our customers are CPU and memory bound when using SQL Server 2000. Stepping up to 2005 is a significant boost. SQL Server 2005 also runs on Windows 2003 64 bit.
The 64-bit OS appears, in our sample testing, to give a huge performance boost. Unfortunately, it also gives us problems with some of our applications. Most significantly, we have not successfully deployed .Net 1.1 on the OS, so all our web applications must be migrated to Visual Studio 2005. We found that moving our web applications from VS 2003 to VS 2005 required some work, and we are still working through problems with the deployment projects for these applications. Our legacy Visual Basic clients flat out do not run in the Terminal Services environment.
Finally, we are confronting the code in the applications themselves. The products evolved from a client-server architecture; the software expects an active user who performs key functions synchronously. We will move file operations and reporting to asynchronous classes. This frees up the UI and gives the user a responsive experience. But asynchronous execution does not make queries run faster; improving query performance requires reviews of the execution plan, indexes, and indexed views.
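Our products are Visual Basic and .Net, but the pattern is language-agnostic. Here is a minimal sketch of the idea in Python, with hypothetical function names; the shape in .Net would be similar using a BackgroundWorker or async delegates.

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def run_report(query):
    """Placeholder for the long-running database call."""
    ...

def on_run_report_clicked(query, update_ui):
    # Hand the query to a background worker and return at once,
    # so the UI thread stays free to respond to the user.
    future = executor.submit(run_report, query)
    # When the query finishes, push the result back to the UI.
    future.add_done_callback(lambda f: update_ui(f.result()))
```

Again, this only keeps the interface responsive; the query itself runs no faster, which is why the index and execution-plan work still matters.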
It is tedious work, but it will pay off in greater revenue.