
Thursday, September 22, 2011

Role Model?

To be honest, I don’t care that Steve Jobs resigned.
Millions do, though, and the announcement sparked considerable praise regarding his vision, his strategy, and his leadership abilities. To most people he is an iconic figure worthy of emulation, although some do argue that Jobs is a terrible role model.
What kind of role model is Jobs? Or Vijay Mallya? Or Indra Nooyi? Who cares. Doesn’t matter. You and I, we don’t need a role model.
Kapil Dev, the cricket icon, would agree.
But we do need role models — the more the better.
Apparently Jobs paid incredible attention to detail, drove outstanding results, ignored critics when he felt he was right, etc. All admirable qualities. Yet he also may not have been the most, um, sensitive and compassionate leader.
So do you want to be like Steve Jobs? Yes. And no.
That’s why looking for one role model doesn’t work. No matter how wonderful, people are still people, brilliance and flaws and talents and peccadilloes and all. Some traits are worthy of emulation, others are not.
So forget finding a role model. Instead look to different people for specific qualities or skills you want to emulate. Never start with a person; start with the talent or trait. Break attributes down into specifics, the narrower the better.
For example, I admire how the writer Arundhati Roy quickly establishes a scene and creates a mood. I admire how Salman Rushdie seamlessly blends information with story to foster understanding. I admire Winston Churchill’s wit and gift for phrasing. I can’t write like any of them — far from it. And I shouldn’t want to; I should be me, not them. But, if establishing a mood is critical, I can think, “How would Arundhati Roy handle this…?” drawing on her skills for guidance and inspiration.
And in the process hopefully become an even better me.
The same is true with speaking. When I speak I’m pretty casual. But at times, to make a complex point, I need to be more formal. Shiv Khera is exceptional at marshaling facts, research, and examples to craft solid arguments. I can’t speak as well as he can — far from it — but I can draw on that one aspect of his considerable skills to be a more effective speaker.
I can list lots more examples. I’ve been writing them down for years. Whenever I see someone do something well, I write it down. My list ranges all the way from the waitress who dealt with an overbearing customer to the CEO who handled an employee meeting that turned violent to Azim Premji’s knack for thinking about marketing as an integral part of creating substance.
You run into people who excel at something all the time, so start your own list. Just don’t focus on the person, because you don’t want to be like them. Instead you just want to do something, usually a very specific something, the way they do.
If having a role model can help you be more successful, why not have dozens of role models? Then you can still be you and use your role models to be an even better you.

- Adapted from “Forget About Finding a Role Model” by Jeff Haden, a prolific writer of management books



Friday, September 9, 2011

Ten Presentation Problems


REASON #10: It is all data, no story!

• Diagnosis: You presented scads of information without any context or meaning.

• Why you did it: You wrongly assumed a presentation was the same thing as a lecture.

• What resulted: The audience pulled out their Blackberries when you clicked your fifth slide.

• How to fix it: Make your presentation tell a story, ideally with the audience as the heroes.

REASON #9: Your slides are too fancy!

• Diagnosis: You filled your slides with special effects and visual gimcracks.

• Why you did it: You were afraid that the audience would find you boring.

• What resulted: Your audience watched the pretty pictures and missed what you were saying.

• How to fix it: Use the minimum visuals that you need to tell the story.

REASON #8: Your slide background is too busy!

• Diagnosis: You used a background template that was busy and obtrusive.

• Why you did it: You wrongly thought it would make your slides look more “professional.”

• What resulted: Your audience got headaches trying to see what was actually on each slide.

• How to fix it: Use a simple, single color background. Always.

REASON #7: Your fonts are unreadable!

• Diagnosis: You used fonts that were too fancy or too small or both.

• Why you did it: The fonts looked great on your computer; on the projector… not so much.

• What resulted: The audience squinted and peered, and then gave up. Blackberry time!

• How to fix it: Use large fonts in simple faces (like Arial); avoid boldface, italics and UPPERCASE.

REASON #6: Your graphics are too complex!

• Diagnosis: You inserted giant, complicated graphics with lots of little details.

• Why you did it: One picture is worth a thousand words, right? (Uh, wrong.)

• What resulted: Your audience stared glassy-eyed, then pulled out their Blackberries.

• How to fix it: Only include simple graphics; highlight the data point that’s important.

REASON #5: You are all opinion, no fact!

• Diagnosis: You expressed all sorts of opinions without any supporting data.

• Why you did it: Laziness. It’s easy to claim “leadership”; it’s harder to actually be a leader.

• What resulted: Your credibility with the audience leaped right down the toilet.

• How to fix it: Only state opinions that you can back up with quantifiable data.

REASON #4: You speak fluent biz-blab!

• Diagnosis: Your presentation was filled with tacky business buzzwords.

• Why you did it: You wrongly thought the biz-blab made you sound “business-like.”

• What resulted: Your audience thought you were pompous, crazy, and/or talking in tongues.

• How to fix it: Just stop it. Cold turkey. Please.

REASON #3: You drifted off topic!

• Diagnosis: You included data and anecdotes that didn’t reinforce your message.

• Why you did it: You didn’t bother to figure out what would really interest your audience.

• What resulted: Your audience lost your train of thought and you lost credibility.

• How to fix it: Only include material that’s relevant to your overall message.

REASON #2: It was too d**n long!

• Diagnosis: You presented way more than anybody wanted to know.

• Why you did it: You were “spraying and praying” that something would pique their interest.

• What resulted: Zzzzzzzzzzzzzzzzz…

• How to fix it: Always make your presentation less than half as long as you think it should be.

REASON #1: You read from your slides!

• Diagnosis: You stood there like an idiot and read aloud what everyone could read for themselves.

• Why you did it: You didn’t know the material so you needed your slides as a memory-jog.

• What resulted: By your third slide, your audience was ready to strangle you.

• How to fix it: Use slides to reinforce your message rather than to outline your data points.

- Adapted from “Top 10 Reasons Your Presentation Sucks” by Geoffrey James on Bnet.com

Tuesday, March 29, 2011

FMEA

Failure modes and effects analysis (FMEA) is a step-by-step approach for identifying all possible failures in a design, a manufacturing or assembly process, or a product or service.


“Failure modes” means the ways, or modes, in which something might fail. Failures are any errors or defects, especially ones that affect the customer, and can be potential or actual.

Failures are prioritized according to

1. How serious their consequences are,

2. How frequently they occur and

3. How easily they can be detected.



The purpose of the FMEA is to take actions to eliminate or reduce failures, starting with the highest-priority ones.

Failure modes and effects analysis also documents current knowledge and actions about the risks of failures, for use in continuous improvement. FMEA is used during design to prevent failures. Later it’s used for control, before and during ongoing operation of the process. Ideally, FMEA begins during the earliest conceptual stages of design and continues throughout the life of the product or service.

When to use FMEA

• When a process, product or service is being designed or redesigned, after quality function deployment.

• When an existing process, product or service is being applied in a new way.

• Before developing control plans for a new or modified process.

• When improvement goals are planned for an existing process, product or service.

• When analyzing failures of an existing process, product or service.

• Periodically throughout the life of the process, product or service.

FMEA Procedure

1. Assemble a cross-functional team of people with diverse knowledge about the process, product or service and customer needs. Functions often included are: design, manufacturing, quality, testing, reliability, maintenance, purchasing (and suppliers), sales, marketing (and customers) and customer service.

2. Identify the scope of the FMEA. Is it for concept, system, design, process or service? What are the boundaries? How detailed should we be? Use flowcharts to identify the scope and to make sure every team member understands it in detail.

3. Fill in the identifying information at the top of your FMEA form.

4. Identify the functions of your scope. Ask, “What is the purpose of this system, design, process or service? What do our customers expect it to do?” Name it with a verb followed by a noun. Usually you will break the scope into separate subsystems, items, parts, assemblies or process steps and identify the function of each.

5. For each function, identify all the ways failure could happen. These are potential failure modes. If necessary, go back and rewrite the function with more detail to be sure the failure modes show a loss of that function.

6. For each failure mode, identify all the consequences on the system, related systems, process, related processes, product, service, customer or regulations. These are potential effects of failure. Ask, “What does the customer experience because of this failure? What happens when this failure occurs?”

7. Determine how serious each effect is. This is the severity rating, or S. Severity is usually rated on a scale from 1 to 10, where 1 is insignificant and 10 is catastrophic. If a failure mode has more than one effect, write on the FMEA table only the highest severity rating for that failure mode.

8. For each failure mode, determine all the potential root causes. Use cause analysis tools, as well as the best knowledge and experience of the team. List all possible causes for each failure mode on the FMEA form.

9. For each cause, determine the occurrence rating, or O. This rating estimates the probability of failure occurring for that reason during the lifetime of your scope. Occurrence is usually rated on a scale from 1 to 10, where 1 is extremely unlikely and 10 is inevitable. On the FMEA table, list the occurrence rating for each cause.

10. For each cause, identify current process controls. These are tests, procedures or mechanisms that you now have in place to keep failures from reaching the customer. These controls might prevent the cause from happening, reduce the likelihood that it will happen or detect failure after the cause has already happened but before the customer is affected.

11. For each control, determine the detection rating, or D. This rating estimates how well the controls can detect either the cause or its failure mode after they have happened but before the customer is affected. Detection is usually rated on a scale from 1 to 10, where 1 means the control is absolutely certain to detect the problem and 10 means the control is certain not to detect the problem (or no control exists). On the FMEA table, list the detection rating for each cause.

12. Is this failure mode associated with a critical characteristic? (Critical characteristics are measurements or indicators that reflect safety or compliance with government regulations and need special controls.) If so, a column labeled “Classification” receives a Y or N to show whether special controls are needed. Usually, critical characteristics have a severity of 9 or 10 and occurrence and detection ratings above 3.

13. Calculate the risk priority number, or RPN, which equals S × O × D. Also calculate Criticality by multiplying severity by occurrence, S × O. These numbers provide guidance for ranking potential failures in the order they should be addressed (a small worked sketch follows this list).

14. Identify recommended actions. These actions may be design or process changes to lower severity or occurrence. They may be additional controls to improve detection. Also note who is responsible for the actions and target completion dates.

15. As actions are completed, note results and the date on the FMEA form. Also, note new S, O or D ratings and new RPNs.
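To make the arithmetic of step 13 concrete, here is a minimal Python sketch; the failure modes, causes and ratings below are hypothetical, invented purely for illustration.

# Hypothetical failure-mode records: (failure mode, cause, S, O, D)
failure_modes = [
    ("Seal leaks",       "Wrong gasket material",       8, 4, 3),
    ("Unit overheats",   "Fan controller firmware bug", 9, 2, 5),
    ("Display flickers", "Loose connector",             4, 6, 2),
]

def rpn(s, o, d):
    # Risk priority number: severity x occurrence x detection, each rated 1-10
    return s * o * d

def criticality(s, o):
    # Criticality: severity x occurrence
    return s * o

# Rank failure modes with the highest RPN first, as a guide to what to address first
for mode, cause, s, o, d in sorted(failure_modes, key=lambda r: rpn(r[2], r[3], r[4]), reverse=True):
    print(f"{mode:<17} {cause:<28} RPN = {rpn(s, o, d):3d}   Criticality = {criticality(s, o):2d}")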

- Extracted from http://asq.org/learn-about-quality/process-analysis-tools/overview/fmea.html

Saturday, February 26, 2011

The t-Test


The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups.
Figure 1. Idealized distributions for treated and comparison group posttest values.


Figure 1 shows the distributions for the treated (blue) and control (green) groups in a study. The figure indicates where the control and treatment group means are located. The question the t-test addresses is whether the means are statistically different.

What does it mean to say that the averages for two groups are statistically different? Consider the three situations shown in Figure 2. The first thing to notice about the three situations is that the difference between the means is the same in all three. But the three situations don't look the same. The top example shows a case with moderate variability of scores within each group. The second situation shows the high-variability case. The third shows the case with low variability. Clearly, we would conclude that the two groups appear most different or distinct in the bottom or low-variability case. Why? Because there is relatively little overlap between the two bell-shaped curves. In the high-variability case, the group difference appears least striking because the two bell-shaped distributions overlap so much.
Figure 2. Three scenarios for differences between means.

This leads us to a very important conclusion: when we are looking at the differences between scores for two groups, we have to judge the difference between their means relative to the spread or variability of their scores. The t-test does just this.

Statistical Analysis of the t-test

The formula for the t-test is a ratio. The top part of the ratio is just the difference between the two means or averages. The bottom part is a measure of the variability or dispersion of the scores. This formula is essentially another example of the signal-to-noise metaphor in research: the difference between the means is the signal that, in this case, we think our program or treatment introduced into the data; the bottom part of the formula is a measure of variability that is essentially noise that may make it harder to see the group difference. Figure 3 shows the formula for the t-test and how the numerator and denominator are related to the distributions.
Figure 3. Formula for the t-test.

The top part of the formula is the difference between the means. The bottom part is called the standard error (SE) of the difference. To compute it, we take the variance for each group and divide it by the number of people in that group. We add these two values and then take their square root. The specific formula is given in Figure 4:
Figure 4. Formula for the standard error of the difference between the means.

The variance is of course simply the square of the standard deviation.

The final formula for the t-test is shown in Figure 5:
Figure 5. Formula for the t-test.
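In symbols, the computation just described (the difference between the group means divided by the standard error of that difference) can be written as below; this is a standard rendering of the formulas in Figures 4 and 5, with T denoting the treated group and C the comparison group:

\mathrm{SE}(\bar{x}_T - \bar{x}_C) = \sqrt{\dfrac{\mathrm{var}_T}{n_T} + \dfrac{\mathrm{var}_C}{n_C}},
\qquad
t = \dfrac{\bar{x}_T - \bar{x}_C}{\mathrm{SE}(\bar{x}_T - \bar{x}_C)}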

The t-value will be positive if the first mean is larger than the second and negative if it is smaller. Once we compute the t-value we have to look it up in a table of significance to test whether the ratio is large enough to say that the difference between the groups is not likely to have been a chance finding. To test the significance, we need to set a risk level (called the alpha level). In most social research, the "rule of thumb" is to set the alpha level at .05. This means that five times out of a hundred we would find a statistically significant difference between the means even if there was no such difference (i.e., a "chance" occurrence). We also need to determine the degrees of freedom (df) for the test. In the t-test, the df is the sum of the persons in both groups minus 2. Given the alpha level, the df, and the t-value, we can look the t-value up in a standard table of significance to determine whether it is large enough to be significant. If it is, we can conclude that the difference between the means of the two groups is statistically significant (even given the variability).
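As a small sketch of the whole procedure in Python: the scores below are made-up illustrative data, and instead of a printed table of significance the code gets the p-value directly from scipy's t-distribution.

import numpy as np
from scipy import stats

# Made-up posttest scores, for illustration only
treated = np.array([78, 85, 92, 74, 88, 81, 90, 84])
control = np.array([72, 69, 80, 75, 66, 77, 71, 74])

# Difference between the means (the "signal")
diff = treated.mean() - control.mean()

# Standard error of the difference (the "noise"): each group's variance
# divided by its group size, summed, then square-rooted
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))

t_value = diff / se
df = len(treated) + len(control) - 2          # degrees of freedom
p_value = 2 * stats.t.sf(abs(t_value), df)    # two-sided p-value from the t-distribution

alpha = 0.05
print(f"t = {t_value:.2f}, df = {df}, p = {p_value:.4f}")
print("Significant at the .05 level" if p_value < alpha else "Not significant at the .05 level")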

- From http://www.socialresearchmethods.net/kb/index.php







Sunday, February 13, 2011

Design of Experiments (DoE)

DoE is a third 'Advanced Tool' for problem solving.
Despite all the efforts by specialists in quality and statistics, Design of Experiments (DOE) is still not applied as widely as it could and should be, because of a wrong notion that it is too complex. In essence, we just need to know how a system, product or process will react if one factor is changed from one level to another.


We can divide the experimentation process into four phases: setting up the experiment, executing the tests, analyzing the results and drawing conclusions. We need to use basic rules from DOE to avoid mistakes.

Rule number one: write down the questions you would like to see answered by the experiment. E.g. does the "red tomato" fertilizer increase my tomato harvest by at least 20% in weight?

Rule number two: don’t forget that characteristics that are not part of the study also need to fulfill requirements.

E.g., even if changing the fertilizer gives us 20% more tomatoes, they should not be of bad taste or small size. So at the end of the experiment we also need to measure and evaluate these characteristics.

Rule number three: make sure to have a reliable measurement system. You must be aware of the importance of the variation introduced by the measurement system and have to keep it at a minimum.

Rule number four: use statistics and statistical principles upfront. If you want to detect small differences, the sample sizes increase drastically; for other cases, they can be smaller.

Rule number five: beware of known enemies. E.g. a tree casts shade on some tomato plants but not on the others. We can place half of each treatment in the sun and half in the shade. In DOE this is called "blocking". For every known enemy we have to develop a strategy and keep it constant for the test.

Rule number six: beware of unknown enemies. E.g. in a garden, soil composition, the effect of wind, ground water levels, etc. may or may not influence the result of our test. So the experiment is set up in such a way that these factors are distributed randomly, by chance. Randomizing within each block can be done by taking three black and three red playing cards, shuffling them and picking one card at each test location within the block. If it is a black card, treat that plant with "tomato lover"; if it is a red card, treat it with "the red tomato".

This is a randomization in location, in many industrial tests, randomization in time is needed. This means that the sequence of executing the tests has to be decided by chance within each block.
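A minimal Python sketch of this blocking-and-randomization step for the tomato example: the two fertilizer names come from the text, while the block layout (three plants of each treatment per block, mirroring the three black and three red cards) is assumed for illustration.

import random

treatments = ["tomato lover", "the red tomato"]   # the two fertilizers in the example
blocks = ["sun", "shade"]                         # known enemy: the tree's shade
plants_per_treatment_per_block = 3                # assumed: 3 + 3 plants in each block

random.seed(2011)                                 # fixed seed so the plan is reproducible
plan = {}
for block in blocks:
    # the "deck of cards": equal numbers of each treatment, shuffled within the block
    deck = treatments * plants_per_treatment_per_block
    random.shuffle(deck)
    plan[block] = deck

for block, assignment in plan.items():
    print(block, assignment)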

Rule number seven: beware of what goes on during testing. With industrial tests there is no end to what can go wrong during testing. In many cases the people performing the tests have not been part of the team that designed it; they have no idea what it is about or sometimes even why it is done. So keep these two golden rules in mind:

1. He who communicates is king

2. Be where it happens when it happens.

Rule number eight: analyse the results statistically to find the mean and the standard deviation for the two treatments. Statistically we test the null hypothesis that the means are equal versus the alternative that the difference between the means is larger than the objective. This is done with a t-test.

If the result is positive, you would still have to analyze all the other characteristics that need to fulfill minimum requirements.
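A sketch of rule eight in Python: the harvest weights are invented for illustration, and the test checks whether the new fertilizer beats the old one by more than the 20% objective from rule number one (the alternative argument needs scipy 1.6 or later).

import numpy as np
from scipy import stats

# Invented harvest weights (kg per plant), purely for illustration
red_tomato   = np.array([4.1, 4.6, 4.3, 4.8, 4.5, 4.4])   # new fertilizer
tomato_lover = np.array([3.4, 3.6, 3.1, 3.5, 3.3, 3.6])   # current fertilizer

objective = 0.20 * tomato_lover.mean()   # "at least 20% more", from rule number one

# H0: mean(new) - mean(old) = objective   vs   H1: mean(new) - mean(old) > objective
# Shifting the control group up by the objective turns this into a standard
# one-sided two-sample t-test.
t_stat, p_value = stats.ttest_ind(red_tomato, tomato_lover + objective, alternative="greater")

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("Objective met" if p_value < 0.05 else "No evidence the objective is met")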

Rule number nine: present the results graphically. Since not all the people involved in the experiment are knowledgeable about statistics, graphical presentation is important in communicating the results. Actually, in most cases the graphical output will tell the whole story. Only when some doubt remains may the exact numbers be needed to take a final decision.

Conclusion

There is no such thing as a "simple" experiment. No matter how simple it may look, you need to take several rules into account if you want to be able to draw correct conclusions out of your tests. Don’t forget that it is equally expensive to run a bad or a good experiment. The only difference is that the good experiment has a return on investment.

- Ref: http://www.improvementandinnovation.com

Saturday, February 5, 2011

Regression Analysis

This is another of the 'Advanced Tools' for problem solving.
Here we apply regression analysis to some fictitious data, and we show how to interpret the results of our analysis.
Note: Regression computations can also be done in an Excel spreadsheet. For this example, however, we will do the computations "manually", to see the details of how the method works.

Problem Statement
Last year, five randomly selected students took a math aptitude test before they began their statistics course. The Statistics Department has three questions.
• What linear regression equation best predicts statistics performance, based on math aptitude scores?
• If a student made an 80 on the aptitude test, what grade would we expect her to make in statistics?
• How well does the regression equation fit the data?

How to Find the Regression Equation
In the table below, the xi column shows scores on the aptitude test. Similarly, the yi column shows statistics grades. The last two rows show sums and (arithmetic) mean scores that we will use to conduct the regression analysis.

The regression equation is a linear equation of the form: ŷ = b0 + b1x . To conduct a regression analysis, we need to solve for b0 and b1. Computations are shown below.
Therefore, the regression equation is: ŷ = 26.768 + 0.644x .

How to Use the Regression Equation
Once you have the regression equation, using it is a snap. Choose a value for the independent variable (x), perform the computation, and you have an estimated value (ŷ) for the dependent variable.

In our example, the independent variable is the student's score on the aptitude test. The dependent variable is the student's statistics grade. If a student made an 80 on the aptitude test, the estimated statistics grade would be:

ŷ = 26.768 + 0.644x = 26.768 + 0.644 * 80 = 26.768 + 51.52 = 78.288
Warning: When you use a regression equation, do not use values for the independent variable that are outside the range of values used to create the equation. That is called extrapolation, and it can produce unreasonable estimates.

In this example, the aptitude test scores used to create the regression equation ranged from 60 to 95. Therefore, only use values inside that range to estimate statistics grades. Using values outside that range (less than 60 or greater than 95) is problematic.
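Pulling the fitting and prediction steps together, here is a small Python sketch. The five score pairs are illustrative values in the 60-95 aptitude range, chosen so that the fitted line reproduces the coefficients quoted above.

import numpy as np

# Illustrative score pairs in the 60-95 aptitude range
x = np.array([95.0, 85.0, 80.0, 70.0, 60.0])   # aptitude test scores
y = np.array([85.0, 95.0, 70.0, 65.0, 70.0])   # statistics grades

# Least-squares slope and intercept
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
print(f"slope b1 = {b1:.3f}, intercept b0 = {b0:.3f}")
# Note: the intercept of 26.768 quoted above uses the slope rounded to 0.644;
# with full precision the intercept is about 26.78.

# Predicted statistics grade for an aptitude score of 80 (inside the 60-95 range)
print(f"prediction at x = 80: {b0 + b1 * 80:.1f}")   # about 78.3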

How to Find the Coefficient of Determination
Whenever you use a regression equation, you should ask how well the equation fits the data. One way to assess fit is to check the coefficient of determination, which can be computed from the following formula.
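One standard way of writing this formula, consistent with the symbol definitions that follow (and taking σx and σy as the divide-by-N standard deviations), is:

R^2 = \left[ \dfrac{\tfrac{1}{N}\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sigma_x \, \sigma_y} \right]^2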

where N is the number of observations used to fit the model, Σ is the summation symbol, xi is the x value for observation i, x̄ is the mean x value, yi is the y value for observation i, ȳ is the mean y value, σx is the standard deviation of x, and σy is the standard deviation of y. Computations for the sample problem of this lesson are shown below.


A coefficient of determination equal to 0.48 indicates that about 48% of the variation in statistics grades (the dependent variable) can be explained by the relationship to math aptitude scores (the independent variable). This would be considered a good fit to the data, in the sense that it would substantially improve an educator's ability to predict student performance in statistics class. A coefficient of 1 indicates a perfect or 100% fit. A correlation greater than 0.8 is generally described as strong, whereas a correlation less than 0.5 is generally described as weak. These values can vary based upon the "type" of data being examined. A study utilizing scientific data may require a stronger correlation than a study using social science data.

- Ref AP* (Advanced Placement) Statistics Tutorial, http://stattrek.com/AP-Statistics-1/Regression-Example.aspx

Wednesday, January 26, 2011

Fault Tree Analysis (FTA)

This is one of the 'Advanced Tools' for Problem Solving.
A Fault Tree Analysis in one of its simplest forms is shown in the following example.
In it, any of the following failures will cause the system to fail:

• Failure of components 1 and 2.

• Failure of components 3 and 4.

• Failure of components 1 and 5 and 4.

• Failure of components 2 and 5 and 3.
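A minimal Python sketch of how these failure combinations (the cut sets) determine system failure; the component state values are hypothetical, chosen only to exercise the logic.

# Hypothetical component states: True means the component has failed
failed = {1: False, 2: True, 3: False, 4: True, 5: True}

# The failure combinations listed above: the system fails if every
# component in any one of these sets has failed
cut_sets = [{1, 2}, {3, 4}, {1, 5, 4}, {2, 5, 3}]

system_failed = any(all(failed[c] for c in cs) for cs in cut_sets)
print("System fails" if system_failed else "System still works")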




- Adapted from http://www.weibull.com/basics/fault-tree/index.htm

Monday, January 24, 2011

Process Decision Programme Chart (PDPC)

The Process Decision Program Chart (often just called PDPC) is a very simple tool with an unnecessarily impressive-sounding name, possibly derived from its Japanese name; it comes from Japan as one of the '7 Management Tools’.
A useful way of planning is to break down tasks into a hierarchy, using a Tree Diagram. PDPC simply extends this chart a couple of levels to identify risks and countermeasures for the bottom-level tasks, as in the diagram below. Differently shaped boxes are used to highlight the risks and countermeasures (they are often shown as 'clouds' to indicate their uncertain nature).
Using PDPC means applying a little rigour to identify possible problems and countermeasures in each area before diving into action.



1. Break down the task into a Tree Diagram. The bottom 'leaves' on the tree will now indicate the actual tasks to be carried out.
2. For each bottom-level task 'leaf', brainstorm or otherwise identify a list of possible problems that could occur.
3. Select one or a few of the risks identified in step 2 to put on the diagram, based on a combination of the probability of the risk occurring and the potential impact, should the risk materialise.
4. For each risk selected in step 3, brainstorm or otherwise identify possible countermeasures that you could take to minimise the effect of the risk.
5. Select a practical subset of countermeasures identified in step 4 to put on the chart.
6. Continue building the chart as above, finding risks and countermeasures for each task. If there are a large number of tasks, you can simplify the job by only doing this for tasks that are considered to be at risk or where the impact of their failure would be large.

- Adapted from http://syque.com/quality_tools/tools/TOOLS12.htm

Saturday, January 22, 2011

Arrow Diagram

This is another of the ‘7 Management Tools’ and is used to show the sequence of a series of operations and the relationships among them. It gives a clear picture of the system and enables thought on improvements that can be initiated in the operations and their likely consequences for subsequent stages.
For ease of identification, the stages in the following sample diagram are shown as numbers; in an actual arrow diagram, the numbers would be replaced by the specific names of the operations.


Friday, January 14, 2011

Affinity Diagrams

The Affinity Diagram, another of the ‘7 Management Tools’, is used when a number of apparently unconnected points are raised for achieving an objective. One way to form the Affinity Diagram (or the KJ diagram, named after Jiro Kawakita) is for different people to post their opinions regarding the contributory aspects on to a board on Post-it slips and then to organize those of similar affinity under suitable headings.
The following example illustrates an Affinity Diagram for finding out “Features required for an improved Digital Camera” (from http://www.baran-systems.com/)

Tuesday, January 11, 2011

Matrix Data Analysis Method

Where do we use the Matrix Data Analysis Method, another of the '7 Management Tools', in problem solving?
We use it when investigating factors which affect a number of different items, to determine common relationships.
We use it to determine whether or not logically similar items also have similar factor effects.
We use it to find groups of logically different items which have similar factor effects.
The example given below illustrates how a pharmaceutical combine examined the pain-killing drugs of its subsidiaries in terms of the cost to produce and general efficacy.

Products which are high cost but not of the highest efficacy, viz. C and D, are dropped. The low-cost drug of reasonable efficacy, E, is promoted, and for the remaining high-cost drugs, A and B, a project is initiated to reduce production cost.

Thursday, January 6, 2011

Matrix Diagram

The Matrix Diagram is another of the '7 Management Tools'. It is used when the relationships between elements are known, and it enables them to be seen at a glance.

A Sample of the Matrix Diagram is given below. It illustrates the relation between the technical aspects of a television set and the perception of the effect by the user.


Monday, January 3, 2011

Relations Diagram

This is the second one listed among the '7 Management Tools'.
It is used when causes and effects are very much inter-related, as in the following case illustrating a funds crunch situation in an industry.

Saturday, January 1, 2011

Tree Diagram

The earlier post "Problem Solving Tools" has listed the various systems used, and the Tree Diagram is shown there as one of the '7 Management Tools'.
A sample of the Tree Diagram is reproduced below, showing the trunk, branches and leaves of a figurative tree. It is taken from the following site: http://www.syque.com/quality_tools/toolbook/Tree/example.htm


It illustrates how the method was used in improving the performance of a restaurant.
The points shown in the 'leaves' (of the 'tree') were used for collecting the performance data, setting higher targets and improving the operation of the restaurant.