Quality Quantification for Quality Engineering
Tom@Gilb.com, www.gilb.com
For Brno Universities, 28 March 2013

Quantifying Music
Hi to Andrea Provaglio, Venice. Tom Gilb, lightning talk at ACCU, Oxford, 2012.

“Surely you cannot quantify ‘Music’?”
•I claimed
–we can quantify any variable quality of any system
•I replied:
–I’ll do it in a lightning talk here at ACCU

What is the problem, in quantifying music?
•Can you quantify this music?

Black-Eyed Peas song “I Gotta Feeling” gets 8.9 out of 10 from Hit Song Science software
http://en.wikipedia.org/wiki/I_Gotta_Feeling

October 12, 2009 - Many of us like to believe that there's a little magic behind the making of a hit single. Take a song like "I Gotta Feeling" by The Black Eyed Peas. That's a good song, judging by sales: it sits on top of the Billboard pop chart.

David Meredith, CEO of Music Intelligence Solutions, says there's no magic in that; it's math. His software, called Hit Song Science, gave the song a hit score of 8.9 out of 10. "[It's] a series of algorithms that we use to look at what's the potential of a song to be sticky with a listener," Meredith says. "To have those patterns in the music that would correspond with what human brain waves would find pleasing."

Meredith says his software found that hits have certain common patterns of rhythm, harmony, chord progression, length and lyrics. A study conducted by the Harvard Business School found that the software was accurate 8 out of 10 times.

This summer, Music Intelligence launched a Web site for songwriters called Uplaya. David Bell, of the hip-hop duo the Block Scholars, paid $90 to use it. "To me, it's an unbiased validation of your music," Bell says. "It's not your family turning around and saying, 'Oh, you got a great song.'"

The computer told Bell he had a 7.1 — good, but not great. So he went back to the studio and remixed. He got his score up to 7.6 — good for a platinum rating. He could hold his head up. "We can use Uplaya as a tool to figure out what song we want to put in a demo to send to these labels and stuff," Bell says.

Against The Machine
"From an artist's standpoint, a songwriter's standpoint, it's horrifying to me," says independent singer-songwriter Kym Tuvim. Tuvim says that she can't stand the star-making machine behind popular songs, and that she hates the idea of artists trying to fit songs into algorithms. "You'll find a decreasing amount of any kind of surprises in music," Tuvim says. "This just becomes a tool to make that narrowing of the field more accessible."

Tuvim says her songs come from a mysterious place in her unconscious. She might not love the computer, but the computer loves her song "Flood." It got a 7.3 — that's platinum.

Breaking The Mold
It doesn't surprise New Yorker music critic Sasha Frere-Jones that a computer can predict hits, but he says it can't predict all the hits. Sometimes, songs come along that don't fit the mold. "I think of a song like 'Da Da Da' by Trio, which people love," Frere-Jones says. "They just love that song. And I can't imagine that at the time, in '80-'81, that the software would have given that a very high rating. It was sonically very small. It sounded like a kids' song. They might have told the band, 'No. No. No. No. No. Beef it up.'" The software still doesn't think it's a hit: "Da Da Da" got a 6.
Frere-Jones worries that, if Hit Song Science plays too big a role in the music industry, a lot of good songs will never see the light of day.

Music Intelligence Solutions CEO Meredith calls his software a democratizing force in music — sort of a computerized American Idol. If an unknown, unconnected artist gets a high score, it could get a leg up. Then, his company could help promote the artist with record labels. "We'll shine a spotlight on you," Meredith says. "You'll get recognized, and we'll get the word out, and that's probably a good way for the industry to work relative to it being, 'Who do you know?' It's more about what kind of talent level that you have."

Meredith also notes that his software isn't writing the songs. Human beings do that — at least for now.
http://www.npr.org/templates/story/story.php?storyId=113673324

“There's no magic in that; it's math”
•"[It's] a series of algorithms that we use
•to look at what's the potential of a song
•to be sticky with a listener ...
•To have those patterns in the music that would
•correspond with what human brain waves would find pleasing”
–CEO David Meredith
•A study conducted by the Harvard Business School found that the software was accurate 8 out of 10 times.
http://www.npr.org/templates/story/story.php?storyId=113673324

Measurable Attributes of Hits
•Meredith says his software evaluates songs over sixty elements, including
–Melody
–Harmony
–Tempo
–Pitch
–Octave
–Beat
–Rhythm
–Fullness of sound
–Noise
–Brilliance
–Chord progression
http://edition.cnn.com/2008/WORLD/europe/03/07/spiritof.music/

YouTube Measures
•Number of Likes and Dislikes
–11,021 Likes, 371 Dislikes (April 26, 2012)
•Number of times the video has been viewed
–5,942,649 Views (April 26, 2012)

By Survey: Most Wanted Attributes
•Yudkin reports on a web-based survey into American musical tastes, conducted by Komar and Melamid in 1996.
•If you want to please the greatest number of Americans (72% ± 12%), consider
–Male and female solo voices
–R&B with a love theme
–Small ensemble of musicians
–Length of about 5 minutes
–Moderate pitch, tempo and volume
–http://www.bu.edu/cfa/music/faculty/yudkin/

Most Unwanted Attributes
–To appeal to only about 200 Americans:
•Extreme length
•Wide range of dynamics, tempo and pitch in abrupt succession
•An operatic soprano singing atonally
•A cowboy song with political slogans
•A children’s choir singing holiday songs
•Large orchestra featuring harp, accordion and bagpipes
–http://www.bu.edu/cfa/music/faculty/yudkin/
–There are samples of two songs, written by David Soldier with lyrics by Nina Mankin to these wanted and unwanted guidelines, about 19 minutes into Yudkin’s lecture.
(This slide got a big sustained laugh from the ACCU audience, 27 April 2012.)

Some potentially quantifiable Quality dimensions of Music
•Brainstormed by Steve F. and Rachel D. at lunch:
•In tune
•Applause
•Moving
•Encores
•Repeat Gigs
•Busking Hat Collection
•MRI Brain Scan
•Downloads
•YouTube Reviews
•Royalties
•… (many more!)

Examples in Planguage
•Music.Moving:
•Type: primary music quality attribute.
•Ambition Level: the majority of listeners feel moved to tears or strong physical emotional reactions.
•Scale: the % of defined [Listeners] hearing defined [Music] under defined [Environments] who report a defined [Emotion] at a defined [Strength].
•Goal [1st UK Release, Music = Hip Hop, Environment = iTunes, Emotion = {Tears, Sadness}, Strength = Powerful] 50% ± 20%?
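To make the Planguage spec concrete, here is a minimal Python sketch of the Music.Moving Scale. The survey figures, the function name, and the reading of "± 20%" as an acceptance band are illustrative assumptions, not from the talk:

```python
# Sketch (not from the slides): the Music.Moving Scale as a computation.
# Scale: % of defined [Listeners] hearing defined [Music] under defined
# [Environments] who report a defined [Emotion] at a defined [Strength].

def moving_level(qualifying_reports: int, listeners: int) -> float:
    """Percent of sampled listeners reporting the qualifying emotion."""
    return 100.0 * qualifying_reports / listeners

# Goal [1st UK Release, Music = Hip Hop, Environment = iTunes,
#       Emotion = {Tears, Sadness}, Strength = Powerful]: 50% +/- 20%
GOAL, BAND = 50.0, 20.0

level = moving_level(620, 1000)   # hypothetical survey: 620 of 1000 listeners
met = level >= GOAL - BAND        # reading the +/- 20% as an acceptance band
print(f"Measured {level:.0f}% moved; goal met (>= {GOAL - BAND:.0f}%): {met}")
```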
(Quality dimensions brainstormed with thanks to Steve Freeman and Rachel Davies.)

Philolaus on Numbers
•Over four hundred years BC, a Greek by the name of Philolaus of Tarentum said:
–”Actually, everything that can be known has a Number; for it is impossible to grasp anything with the mind or to recognize it without this (number).”
•Best regards (Aug 2005), N. V. Krishna, www.microsensesoftware.com

Software Engineering Productivity Study (Ericsson, non-confidential)
•An example of setting objectives for process improvement.
•For 1997, with 70% software labor development content in products.
(Photo: main beam from a macrocell base station antenna.)

The problem
•Great market growth opportunities.
•Too few software engineers.
•Solution:
–Increase the productivity of existing engineers.

The One Page Top Management Summary (after 2 weeks of planning)
•The Dominant Goal
–Improve Software Productivity in R PROJECT by 2X by year 2000.
•Dominant (Meta) Strategies
–Continual Improvement (PDSA Cycles)
–DPP: Defect Prevention Process
–EVO: Evolutionary Project Management
•Long Term Goal [1997-2000+]
–DPP/EVO: master them and spread them on a priority basis.
•Short Term Goal [Next Weeks]
–DPP [RS?]
–EVO [Package C?]
•Decision: {Go, Fund, Support}

The Ericsson Quality Policy
•"Every company shall define performance indicators (which) ..
–reflect customer satisfaction,
–internal efficiency
–and business results.
•The performance indicators are used in controlling the operation."
–Quality Policy [4.1.3]

Levels of Objectives
–Fundamental Objectives
–Strategic Objectives
–Means Objectives
–Organizational Activity Areas:
•Pre-study
•Feasibility Study
•Execution
•Conclusion
–Generic Constraints:
•Political and Practical Constraints
•Design Strategy Formulation Constraints
•Quality of Organization Constraints
•Cost/Time/Resource Constraints

Keeney’s Levels of Objectives
–1. Fundamental Objectives (above us)
–2. Generic Constraints (our given framework)
•Political and Practical Constraints
•Design Strategy Formulation Constraints
•Quality of Organization Constraints
•Cost/Time/Resource Constraints
–3. Strategic Objectives (objectives at our level)
–4. Means Objectives (supporting our objectives)
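As a side illustration (the encoding below is a sketch, not from Keeney or the slides), the four levels can be written as an ordered type, so the rule that lower levels support, and never displace, higher ones can be checked mechanically:

```python
from enum import IntEnum

# Keeney's four levels, ordered so a lower number means higher priority.
# The ordering is from the slide; the rule encoding is an assumption.
class Level(IntEnum):
    FUNDAMENTAL = 1   # above us: profit, survival
    CONSTRAINT = 2    # our given framework
    STRATEGIC = 3     # objectives at our level
    MEANS = 4         # supporting our objectives

def may_yield_to(ours: Level, other: Level) -> bool:
    """An objective should yield effort only to a higher-priority level."""
    return other < ours

assert may_yield_to(Level.MEANS, Level.STRATEGIC)       # means serve strategy
assert not may_yield_to(Level.STRATEGIC, Level.MEANS)   # never the reverse
print("priority ordering holds")
```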
The Strategic Objectives (CTO level)
–Support the Fundamental Objectives (profit, survival):
•Software Productivity: Lines of Code Generation Ability
•Lead-Time
•TTMP: Predictability of Time To Market
•Product Attributes
•Customer Satisfaction
•Profitability

‘Means’ Objectives
–Support the Strategic Objectives:
•Complaints
•Feature Production
•Rework Costs
•Installation Ability
•Service Costs
•Training Costs
•Specification Defectiveness
•Specification Quality
•Improvement ROI

"Let no man turn aside, ever so slightly, from the broad path of honour, on the plausible pretence that he is justified by the goodness of his end. All good ends can be worked out by good means."
(Charles Dickens)
Strategies (total brainstormed list): means for delivering the Strategic Objectives
–Evo [Product Development]
–DPP [Product Development Process]: Defect Prevention Process
–Inspection?
–Motivation.Stress-Management-AOL
–Motivation.Carrot
–DBS
–Automated Code Generation
–Requirements Traceability
–Competence Management
–Delete Unnecessary Documents
–Manager Reward?
–Team Ownership?
–Manager Ownership?
•Training?
•Clear Common Objectives?
•Application Engineering area: brainstormed list (not evaluated or prioritized yet)?
•Requirements Engineering: brainstormed suggestions?
•Engineering Planning
•Process Best Practices: brainstormed suggestions?
•Push-Button Deployment
•Architecture Best Practices
•Stabilization
•World-wide Co-operation?

Principles for Prioritizing Strategies
•They are well-defined
–not vague.
•They have some relevant, predictable numeric experience
–on main effects,
–side effects,
–costs,
–risks and uncertainty,
•with no huge spread of experience.

“Software Productivity” = Lines of Code Generation Ability
–“Software Engineering net production in relation to corresponding costs.”
–Ambition: Net lines of code successfully produced per total working hours needed to produce them. A measure of the efficiency ('effective production/cost of production') of the organization in using its software staff.
•Scale: [Defined Volume, kNCSS or kPlex] per Software Development Work-Hour.
•Software Development: Defined: productivity calculations include Work-Hours for software engineering used in The Company Execution Phase.
•Meter: <…>
–Comment: we know that real software productivity is not measured by lines of code, but we have consciously chosen this measure as it is available in our current culture. AB, PK, TG.
•Past [1997, ERA/AR] <to be calculated when data available: Volume/Work-Hours>
•Past-R PROJECT: Past [1997, R PROJECT] <to be calculated when data available: Volume/Work-Hours>
•Past-EEI: Past [1997, Ireland, Plex] ___??___ kPlex/Work-Hour.
•Fail [end 1998, R PROJECT, Same Reliability] 1.5 × Past-R PROJECT <- R PROJECT AS 3c "by 50%"
–"50% better useful code productivity in 1.5 years overall."
•Same Reliability: State: the Software Fault Density is not worse than with comparable productivity. Use official The Company Software Fault Density measures. <- 1997 R PROJECT Balanced Scorecard (PA3)
•Goal [Year = 2000, R PROJECT, Same Reliability] 2 × Past-R PROJECT,
 [Year = 2005, RPL, Same Reliability] 10?? × Past-R PROJECT
•Wish [Long term, vs. D Package] 10 × Past-R PROJECT "times higher productivity" <- R PROJECT 96 1.1c
•Wish [undefined time frame] 1.5 × Past-R PROJECT <- R PROJECT AS 3c "by 50%"
–Comment: May 13 1997, 16:00. We have worked a lot on the Software Productivity objectives (all day) and are happy that they are in pretty good shape. But we recognize that they need more exposure to other people.
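A minimal sketch of how a measured level on this Scale could be checked against the Fail and Goal levels, which are defined relative to Past. The kNCSS and work-hour figures below are invented for illustration; only the 1.5× and 2× multipliers come from the spec:

```python
# Sketch (hypothetical numbers): Software Productivity on its Scale,
# checked against Fail [end 1998] = 1.5 x Past and Goal [2000] = 2 x Past.

def productivity(kncss: float, work_hours: float) -> float:
    """Scale: kNCSS per Software Development Work-Hour."""
    return kncss / work_hours

PAST = productivity(120.0, 4000.0)     # 1997 baseline, invented figures
FAIL = 1.5 * PAST                      # below this by end 1998: failure
GOAL = 2.0 * PAST                      # committed level for year 2000

current = productivity(150.0, 3500.0)  # hypothetical 1998 measurement
status = ("goal met" if current >= GOAL
          else "above fail level" if current >= FAIL
          else "FAIL")
print(f"{current:.3f} kNCSS/h vs past {PAST:.3f} kNCSS/h: {status}")
```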
Lead-Time
•Lead-Time: "Months for major Packages"
•Ambition: decrease the duration, in months, between major Base Station package releases.
•Scale: Months from TG0 to successful first use, for a major work station package.
–Note: let us make a better definition. TG
•Past [C Package, 1996?] 20? months?? <- guess TG
•Goal [D Package] 18 months <- guess TG
•Goal [E Package and later] 10.8 months <- R PROJECT 96 1.1a "40% > D"
•Goal [Generally] ??? <- R PROJECT AS 3a "10% Lead-Time reduction compared to any benchmark"

Predictability of Time To Market
•TTMP: Predictability of Time To Market
–Ambition: from ideas created to customers being able to use them: our ability to meet agreed, specified customer and self-determined targets.
–Scale: % overrun of actual Project Time compared to planned Project Time.
–Project Time: Defined: time from the date Toll-Gate 0 is passed, or another Defined Start Event, to the planned or actually delivered date of All [Specified Requirements], and any set of agreed requirements.
–Specified Requirements: Defined: written, approved quality requirements for products with respect to planned levels and qualifiers [when, where, conditions]; and other requirements such as function, constraints and costs.
–Meter: the Productivity Project or Process Owner will collect data from all projects, or make estimates, and put them in the Productivity Database for reporting this number.
–Past [1994, A Package] <50% to 100%> <- Palli K. guess
–Past [1994, B Package] 80%?? <- Urban Fagerstedt and Palli K. guess
–Record [IBM Federal Systems Division, 1976-80] 0% <- RDM 9.0, quoting Harlan Mills in IBM SJ 4-80: “all projects on time and under budget”
–Record [Raytheon Defense Electronics, 1992-5] 0% <- RDE SEI Report 1995, Predictability
–Fail [All future projects, from 1999] 5% or less <- discussion level, TG
–Goal [All future projects, from 1999] 0% or less <- discussion level, TG
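The TTMP Scale is simple arithmetic; a sketch of it as a function follows, with the Fail and Goal levels from the slide. The project dates are invented for illustration:

```python
# Sketch: the TTMP Scale (% overrun of actual vs. planned Project Time).
from datetime import date

def percent_overrun(tg0: date, planned: date, actual: date) -> float:
    """Overrun of actual Project Time versus planned Project Time, in %."""
    planned_days = (planned - tg0).days
    actual_days = (actual - tg0).days
    return 100.0 * (actual_days - planned_days) / planned_days

# Hypothetical project: TG0 passed Jan 5 1998, planned Dec 14 1998,
# actually delivered Jan 18 1999.
overrun = percent_overrun(date(1998, 1, 5), date(1998, 12, 14), date(1999, 1, 18))
print(f"{overrun:.1f}% overrun")
print("Fail level met (<= 5%):", overrun <= 5.0)   # Fail [from 1999]
print("Goal met (<= 0%):", overrun <= 0.0)         # Goal [from 1999]
```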
Product Attributes
•Product Attributes: “Keeping Product Promises”
–Ambition: ability to meet or beat agreed targets: cost, time and quality (except TTMP itself, see above).
•Scale: % +/- deviation from [defined agreed attributes with projects].
•Past [1990 to 1997, OUR DIVISION] at least 100%??? <- Guess. Not all clearly defined, and differences not tracked. TSG
•Goal [Year = 2000, R PROJECT] near 0% negative deviation <- TSG, for discussion

Customer Satisfaction
•Customer Satisfaction: “Customer Opinion of Us”
•Scale: average survey result on a scale of 1 to 6 (best).
•Meter: The Company Customer Satisfaction Survey.
•Past [1997] 4
•Goal [1998-9?] 5 <- R PROJECT 96 1.1b

Profitability
•Profitability: “Return on Investment”
–Ambition: degree of saleable product ready for installation.
–Scale: Money Value of Gross Income derived by [All R PROJECT Production OR defined products] for [Product Lifetime OR a defined time period].
–Goal: <…>

‘Means Objectives’ Samples
•Same definition process as for the higher-level objectives.

Means Objectives
•“Support Strategic Objectives”
•Summary:
–'Means Objectives' are not our major Strategic Objectives (above),
–but each one represents an area which, if improved, will normally help us achieve our Strategic Objectives.
–Means Objectives have a lower priority than Strategic Objectives.
–They must never be ‘worked towards’ to the point where they reduce our ability to meet Strategic Objectives.

Complaints
•Complaints: "Customer complaint rate to us"
•Ambition: a Means Goal for Customer Satisfaction (Strategic).
•Scale: number of complaints per customer in [defined time into <…>].
•Past [Syracuse Project, 1997] ?? <- ML
•Goal [Long term, software component, in first 6 months in Operation] zero complaints <- R PROJECT 96 1.1b "zero complaints on software features"
•Impacts: <…>

Feature Production
•Feature Production: "ability to deliver new features to customers"
–Ambition: reverse our decreasing ability to deliver new features <- R PROJECT AS 1.1
–Scale: number of new, prioritized features delivered successfully to customers per year, per software development engineer.
–Too Little: Past [1997] ?? "Estimate needed, maybe even a definition of 'feature'."
–Goal [1998 onwards] Too Little + 30% annually?? <- for discussion purposes, TSG
–"We need to drastically change our ability to effectively develop SW." <- R PROJECT AS 1.1

Improvement ROI
•Improvement ROI: "Engineering Process Improvement Profitability"
–Ambition: an order-of-magnitude return on investment in process improvement.
•Scale: the average [annual OR defined time term] Return on Investment in Continuous Improvement, as a ratio of [Engineering Hours OR Money].
•Note: the point of having this objective is to remind us to think in terms of real results from our process improvement effort, to remind us to prioritize efforts which give high ROI, and, finally, to compare our results to others. <- TSG
•Record [Shell NL, Texas Instruments, Inspections] 30:1 <- independently published papers, TSG
•Past [IBM RTP, 1995, DPP Process] 13:1 <- Robert Mays, Washington DC test conference slides, TSG
•Past [Raytheon, 1993-5, Inspection & DPP] $7.70:1 <- RDE Report page 51 ($4.48M/$0.58M); includes detail on how it was calculated. PK has a copy.
•Past [IBM STL, early 1990's] average 1100% ROI (11:1) <- IBM Secrets pp. 32. PK has a copy. NB: a conservative estimate. See Note IBM ROI below.
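The ROI Scale is a plain ratio; a one-function sketch follows, reproducing the Raytheon figure quoted above (the only inputs are the $4.48M and $0.58M from the slide):

```python
# Sketch: the Improvement ROI Scale as an X:1 ratio.

def improvement_roi(return_value: float, investment: float) -> float:
    """Return on investment in process improvement, as an X:1 ratio."""
    return return_value / investment

raytheon = improvement_roi(4.48e6, 0.58e6)
print(f"Raytheon 1993-5: {raytheon:.2f}:1")   # ~7.7:1, matching the slide
```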
How to Quantify any Qualitative Requirement
(Diagram from the ‘Competitive Engineering’ book.)

Quality Quantification Method 1: Common Sense, Domain Knowledge
•Decompose “until quantification becomes obvious”.
•Then use Planguage specification:
–Scale: define a measurement scale.
–Meter: define a test or process for measuring on the scale.
–Past: define benchmarks (the old system, competitors) on the scale.
–Goal: define a committed level of future stakeholder quality, on your scale.

Quality Quantification Method 2: Look it up in a book
•For example, from ‘Competitive Engineering’:
–Tool Collection:
–Scale: Clock hours for defined [Maintenance Instance: Default: Whoever is assigned] to acquire all defined [Tools: Default: all systems and information necessary to analyze, correct and quality control the correction].

Quality Quantification Method 3: Google it.

Quality: the concept, the noun
(Planguage Concept *125, Version: March 20, 2003)
•A ‘quality’ is a scalar attribute, reflecting ‘how well’ a system functions.
(Diagram: benchmark levels such as Past marked along a Scale arrow for the function in question. How well. How much. How much saved. How good.)

Quality is characterized by these traits (from the CE book):
1. Quality describes ‘how well’ a function is done.
2. Quality describes the partial effectiveness of a function (as do all other performance attributes).
3. Quality is valued to some degree by some stakeholders of the system.
4. More quality is generally valued by stakeholders, especially if the increase is free, or costs less than the value of the increase.
5. Quality attributes can be articulated independently of the particular means (designs) used for reaching a specific quality level,
6. even though all quality levels depend on the particular designs used to achieve them.
7. A particular quality can be described in terms of a complex concept, consisting of multiple elementary quality concepts.
8. Quality is variable (along a definable scale of measure, as are all scalar attributes).
9. Quality levels are capable of being specified quantitatively (as are all scalar attributes).
10. Quality levels can be measured in practice.
11. Quality levels can be traded off, to some degree, with other system attributes valued more by stakeholders.
12. Quality can never be perfect (100%) in the real world.
13. Some levels of a particular quality may be outside the state of the art, at a defined time and circumstance.
14. When quality levels increase towards perfection, the resources needed to support those levels tend towards infinity.
(Added Feb 8 2003, after an edit in the CE book glossary entry ‘Quality’.)
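Traits 8-10 say a quality is scalar, quantitatively specifiable, and measurable. A minimal sketch of the Scale/Meter/Past/Goal template from Method 1 as a data structure follows; the field names are illustrative, not official Planguage syntax, and the figures are borrowed loosely from the Restore Speed spec later in these slides:

```python
from dataclasses import dataclass, field

# Sketch: the Scale / Meter / Past / Goal template as a data structure.
@dataclass
class ScalarRequirement:
    tag: str
    scale: str    # Scale: the defined measurement scale
    meter: str    # Meter: the test or process for measuring on the scale
    past: dict = field(default_factory=dict)   # Past: benchmarks, by qualifier
    goal: dict = field(default_factory=dict)   # Goal: committed levels, by qualifier

    def gap(self, benchmark: str, target: str) -> float:
        """Distance from a Past benchmark to a Goal level, on the Scale."""
        return self.goal[target] - self.past[benchmark]

restore = ScalarRequirement(
    tag="Restore.Speed",
    scale="Minutes from initiation of restore to a verified saved state",
    meter="Timed restore drill on a production-like system",
    past={"2006": 10.0},        # hypothetical benchmark
    goal={"Release 1": 1.0},    # hypothetical committed level
)
print(restore.gap("2006", "Release 1"))   # -9.0: the goal is 9 minutes faster
```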
Love Quantification
A 4.5-minute lightning talk at the ACCU Conference, Oxford, April 15 2010.

Class Exercise: Aspects of Love, or Love is a many-splendored thing!
•METHOD
–Make a list of love’s many aspects.
–Quantify one random requirement for love,
–to show that all of the aspects can be similarly quantified.

The Daily Sutra
"Love cannot be proved or disproved. Someone's actions and behavior are not proof of love. Many movie actors and actresses exhibit a lot of love and romance in the movie, but not a drop of romance or love may be there inside. They can just show it."
Sri Sri Ravi Shankar: The Discipline of Yoga.
Translations and previous sutras: http://www.artofliving.org/DS.asp?MailingDate=2/11/2003

Love Attributes: Brainstormed by Dutch Engineers
•Kissed-ness
•Care
•Sharing
•Respect
•Comfort
•Friendship
•Sex
•Understanding
•Trust
•Support
•Attention
•Passion
•Satisfaction
•...
(The title alludes to ‘There is More to Love’, from Lloyd Webber’s musical Aspects of Love, and to the 1955 film Love is a Many-Splendored Thing.)

Trust Defined
•Other aspects of Trust:
–1. Truthfulness
–2. Broken Agreements
–3. Late Appointments
–4. Late Delivery
–5. Gossiping to Others

•Love.Trust.Truthfulness
–Ambition: no lies.
–Scale: average Black Lies/month from [defined sources].
–Meter: independent, confidential log from a sample of the defined sources.
–Past Lie Level: Past [My Old Mate, 2004] 42 <- Bart
–Goal [My Current Mate, Year = 2005] Past Lie Level/2
–Black: Defined: non-white lies.
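Even this half-joking spec computes. A sketch follows; the monthly log entries are hypothetical, while the Past level (42) and the Goal (Past/2) are from the slide:

```python
# Sketch: checking the Love.Trust.Truthfulness Goal against a log.
PAST_LIE_LEVEL = 42              # Past [My Old Mate, 2004] <- Bart
GOAL = PAST_LIE_LEVEL / 2        # Goal [My Current Mate, 2005]: Past/2

monthly_log = [25, 19, 22, 18]   # hypothetical black lies/month, from the log
average = sum(monthly_log) / len(monthly_log)
print(f"Average {average:.1f} black lies/month; "
      f"goal (<= {GOAL:.0f}) met: {average <= GOAL}")
```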
Camaraderie (Real Case, UK)
•Ambition: to maintain an exceptionally high sense of good personal feelings and co-operation amongst all staff (family atmosphere, corporate patriotism), in spite of business change and pressures.
•Scale: probability that individuals enjoy the working atmosphere so much that they would not move to another company for less than a 50% pay rise.
•Meter: apparently real offer, via CD-S.
•Past [September 2001] 60+% <- R & CD
•Goal [Mid 2002] 10%, [End 2002] <1% <- R & CD
•Rationale: maintain staff numbers and morale as the core of the business, and business predictability for customers.

My ‘Christian’ Friend
•Lawrence Day, Seattle, Washington:
•“Love is not quantifiable”
–Not in the Bible.
–Little guidance from God and Jesus.

Love: Biblical Dimensions <- Lawrence Day, Boeing
•A person who loves acts the following way toward the person being loved:
1. suffereth long
2. is kind
3. envieth not
4. vaunteth not itself (or, is not rash; ‘vaunt’ = extravagant self-praise)
5. is not puffed up
6. doth not behave itself unseemly
7. seeketh not her own
8. is not easily provoked
9. thinketh no evil
10. rejoiceth not in iniquity (= an unjust act)
11. rejoiceth in the truth
12. beareth all things
13. believeth all things
14. hopeth all things
15. endureth all things
16. never faileth
The passage is First Corinthians, Chapter 13 (KJV): http://www.biblegateway.com/passage/?search=1+Corinthians+13&version=KJV
Thanks to Bishop Lawrence E. Day, Seattle, August 22 2003.

From Lawrence Day’s letter:
"The biblical citation (Book of First Corinthians, Chapter 13) I included gives the quantification of the term 'love' (agape in Greek). The numbers beside the words reference words in Strong's Exhaustive Concordance. The quantification for love would be as follows: a person who loves acts the sixteen ways listed above toward the person being loved.

So, using these 16 quantifications for 'love', a person could examine the long-term, consistent behavioral actions of one person towards another, and determine whether there was real agape love in the heart of the one claiming to love. Thus the quality of agape love can be measured by these quantified behaviors.

So, my point is two-fold. The first is that I support your contention that all qualities that humans deal with are quantifiable. The second is that the hardest of all, true selfless unconditional love, has in fact been quantified since antiquity. The definition and the measurement of the same have also been largely ignored by the world. I wonder if there's a correlation here. It may be that people like to have 'qualities' unquantified, both professionally and personally. It leaves them more wiggle room.
Dr. Lawrence E. Day, CQA, PMP

While at the London conference, after you had emphasized the necessity of applying the skill of quantifying a quality, one of the participants told me that he did not believe you. One time, after he had asserted to his boss that 'you can't manage what you can't measure', his boss said: what about trust? Trust is needed to successfully manage, but you can't measure it. At the time, I didn't have the opportunity to dialog further, and he seemed pretty convinced of his position.
So, basically, I've just dwelled upon the statement occasionally, and I have come to the conclusion that he is wrong. Basically, there is a set of behaviors that the 'boss' would consider indicators that trust should or should not be conferred. Therefore trust can be quantified. An example might be that always completely filling out expense reports, with all receipts attached and no errors, might be one of the boss's measurable indicators of trust. Another might be that work is always done on time and meets the desired objective. So these quantifiers, although usually not documented, are measurable indicators that make up the quality of trust.

As I thought about it more, I came to the realization that probably the most difficult quality of all, 'love', has been quantified for 2000 years. Here it is in the Bible, in the Book of First Corinthians (1 Cor 13:4-8, KJV; the original citation carried Strong's Concordance numbering and marginal notes):

'Charity suffereth long, and is kind; charity envieth not; charity vaunteth not itself, is not puffed up, doth not behave itself unseemly, seeketh not her own, is not easily provoked, thinketh no evil; rejoiceth not in iniquity, but rejoiceth in the truth; beareth all things, believeth all things, hopeth all things, endureth all things. Charity never faileth.'

There are further breakdowns that can occur, since many of the descriptive words, such as 'kind', can be further quantified. What this does do is make possessing the attribute of 'love' an observable behavior that can be measured against these parameters. Are you easily angered at someone? If so, then that is observable, quantifiable evidence that you don't truly love that person.

So, in summary, I am more convinced than ever that the ability and need to quantify qualities is essential to the accomplishment of desired goals.
Best Regards,
Dr. Lawrence E. Day, CQA, PMP, Seattle WA, USA, August 2003"

A Paper on ‘Love Quantified’
http://www.gilb.com/tiki-download_file.php?fileId=335

Mathematical Models of Love & Happiness
J. C. Sprott, Department of Physics, University of Wisconsin-Madison.
Presented to the Chaos and Complex Systems Seminar in Madison, Wisconsin, on February 6, 2001.
http://sprott.physics.wisc.edu/lectures/love&hap/
Reference: Steven H. Strogatz, Nonlinear Dynamics and Chaos (Addison-Wesley, 1994).
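Sprott's talk builds on the style of model Strogatz describes: a linear "Romeo and Juliet" love-affair system, dR/dt = aR + bJ, dJ/dt = cR + dJ. A minimal simulation sketch follows; the coefficients and initial feelings are invented for illustration:

```python
# Sketch: Strogatz-style linear love dynamics, integrated by Euler steps.
# dR/dt = a*R + b*J   (Romeo's feelings for Juliet)
# dJ/dt = c*R + d*J   (Juliet's feelings for Romeo)

a, b = -0.2, 1.0    # Romeo cools on his own, warms to Juliet's love
c, d = -1.0, 0.1    # Juliet retreats from Romeo's advances

R, J, dt = 1.0, 0.0, 0.01
for _ in range(2000):               # 20 time units of simple Euler steps
    dR = (a * R + b * J) * dt
    dJ = (c * R + d * J) * dt
    R, J = R + dR, J + dJ

# This parameter choice gives a slowly damped love/hate oscillation.
print(f"after 20 time units: R = {R:.2f}, J = {J:.2f}")
```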
Horror Project Requirements Case
Based on a real case, 2006-8.

Summary of Top ‘8’ Project Objectives
•Defined Scales of Measure:
–demand comparative thinking,
–lead to requirements that are unambiguously clear,
–help the Team be aligned with the Business.
© Tom@Gilb.com, www.Gilb.com

The top eight objectives, as originally written:
1. Central to The Corporation's business strategy is to be the world's premier integrated service provider.
2. Will provide a much more efficient user experience.
3. Dramatically scale back the time frequently needed after the last data is acquired to time align, depth correct, splice, merge, recompute and/or do whatever else is needed to generate the desired products.
4. Make the system much easier to understand and use than has been the case for previous systems.
5. A primary goal is to provide a much more productive system development environment than was previously the case.
6. Will provide a richer set of functionality for supporting next-generation logging tools and applications.
7. Robustness is an essential system requirement (see the rewrite in the example below).
8. Major improvements in data quality over current practices.

Real Example of Lack of Scales
•This lack of clarity cost them $100,000,000.

The Lesson
•If management does not clarify the main reasons for a software development project, QUANTITATIVELY,
•it can cost $100,000,000+ and 8 years of wasted time.

What the Project Manager Wanted after $160,000,000* was spent
–“Able to add features without fear …
–Able to improve code without fear …
–Able to incorporate improved technology without fear …
–Able to rapidly adapt to changing requirements …
–Code that’s easy to maintain …
–Code that’s uniform, easy to understand …
–Code that’s readily and thoroughly testable …”
*The number was sometimes quoted at $100 million; by 2008 it was certainly much higher, and no deliveries had taken place by May 2008.

Where Do We Want to Be?
•Easy to understand the system code, and therefore maintain/update it
•Uniform code (looks like it was written by one person)
•Code standards for legibility, safety, best practices
•Living component and utility library for rapid development and code reuse
•Easily and comprehensively testable
•Fearless addition of features and advanced capabilities
•Rapid development without fear of regression
•Ability to incorporate new patterns/technology easily (e.g. moving from Database to MXX, or from RDBMS to OODBMS)
•Loosely coupled components that can be modified or replaced with no 'ripple' throughout the system
•Be able to handle changing requirements with minimal (if any) impact (includes the notion of easily reversible decisions)

What the CIO Director Told Me
•“In 1998 I voted to veto this project start because the requirements were insufficient.
•But I was overruled by the other directors (including the current CEO).”

Main Hypothesis by Gilb
1. The requirements are unacceptably unclear.
2. The project has proceeded to throw masses of detail (‘design’) at the unacceptably unclear requirements.
3. There is no objective way to decide if any of the built or planned detail is necessary or sufficient to meet the unclear requirements.
4. There is no point whatsoever in continuing the project on this basis (the bad requirements),
–because there is no way to determine if the project is progressing towards any reasonable goals.
Suggested Practical Actions for the HORROR Project
1. Stop all HORROR Project effort based on the old plans.
2. Adopt a new ‘policy’ for running this project.
3. Quickly (in a week or two) rewrite the top-level requirements:
–Review the current business and technical environment, to see if new and different requirements are more appropriate than the current (3.13.2003) set.
–Quantify all the top few objectives.
–Estimate the value of reaching the objectives.
4. Get the objectives approved by top management.
–This is not the same as project funding approval.
–It just says we would value reaching these objectives,
–and we don’t know of any better ones.
5. Let a ‘qualified’ system architect decide the best way to deliver the results.
–The big question is how much, if any, of the current HORROR project investment can be applied, and to what degree the results need to be evolved into the current customer product and environment.
–Approve the architecture.
6. Don’t ever pour money into the project unless real, measurable improvements are promised and delivered in short cycles!

Requirement 1: Seamless ROCKfield data and workflow (as written)
"Central to THE CORPORATION's ROCKfield business strategy is to be the world's premier INTEGRATED ROCKfield service provider. Software is a key enabling technology towards providing this integration. As an active contributor to this overall strategy, Horror will provide the following:
–Broad MINESITE data coverage. Horror will be able to tap a broad variety of data about the well and its environment. Each of the Horror products will be able to store and exchange all of the following data types, e.g. wireline will be able to access MINING data, etc. These data types include: ..."

GILB COMMENT:
•There is no attempt to define ‘seamless’ quantitatively, so that we can measure and track the final result.
•The content of the rest of the requirement is an equally vague set of functional requirements (like “will support standard Windows OLE compound document functionality”).
•It is not at all clear how well these things will be done: no performance or quality requirements for them are mentioned.
•The likely result is that the function is there, but with substandard user quality and performance.
•We need to define the user experience: how fast, how easy.
•We need to define the end state that would make us the world's premier provider.
•We have not even got close to it.

Requirement 2: Dramatic boost in operational efficiency

GILB ANALYSIS:
–There is no unambiguous definition of ‘operational efficiency’ (no defined Scale or Scales of measure).
–There is no defined level on that (undefined) scale that tells us what is Dramatic, and when it is dramatic: short-term levels, longer-term levels, competitor levels (Goal, Stretch and Trend levels, to use Planguage terms).
–The ‘efficient user experience’ is not defined at all in quantified terms.
–In short, this requirement completely fails, where it could easily have succeeded (in 1998), to specify the level of operational efficiency that the product would measurably achieve.
–The rest of the specification, with features like ‘Automated depth adjustment for data acquired since last deviation survey’, is merely a set of suggested design elements, which will only contribute to operational efficiency if they are well designed and implemented to a defined level of impact on the (as yet undefined) quantified definition of operational efficiency.
–These design ideas do not belong here at all (this applies to all the requirements at this level).
–They should be in a separate architecture or design specification, which suggests appropriate designs for requirements like this one.

The requirement, as written:
"HORROR will provide a much more efficient user experience, by automating a number of routine activities and by removing restrictions on when or how a number of activities may be performed. These improvements include:
•As-you-go product generation. HORROR will provide the following features to dramatically scale back the time frequently needed after the last data is acquired to time align, depth correct, splice, merge, recompute and/or do whatever else is needed to generate the desired products, by semi-automating and/or performing these activities as the data comes in.
•In an interpretation environment, these same features may be used to perform an interactive, end-to-end analysis encompassing all the tools provided by HORROR, without the need to explicitly load, save and/or manually perform the intermediate steps one by one. For example, the user could modify the depth correction and immediately see the effect on the final computed saturation over the currently displayed range. Similarly, these features may be used to allow a log analyst or client to analyze data on their local computer as it is received in real time from the MINEsite."

Requirement 3: Much easier to understand and use (as written)
"A critical requirement for HORROR's success is to make the software much easier to understand and use than has been the case for previous CORPORATION MINE software. Benefits of this requirement include reduced training time, better utilization of system features and fewer operational errors. As an aid in achieving this objective, HORROR has adopted a new use-case-centric development process, which makes the users and their use of the system a focal point of the development. The intent is to design for and evaluate usability continually during the development process, rather than fixing it at the end."
(And it goes on about processes and designs.)

Gilb Comment:
•Essentially the same criticism as above. This concept could be defined quantitatively (see Usability, Gilb, CE Chapter 5; www.gilb.com download).
•‘To understand’ needs a definition (a Scale), and ‘much easier’ needs specification of numeric points on that scale, for various users and tasks.
•The rest of the requirement makes the systemic mistake of diving into specific design detail (“minimized panes, docked and undocked panes, product generation console”, for example).
•These are badly defined, and badly justified, designs for an undefined problem.
•We would end up building them into the system, with no guarantee that we would get the ‘operational efficiency’ we need (since we have not even decided what we want!).

From the original specification:
"From a content point of view, HORROR will provide the following key usability features. Proposals for these features have been illustrated in much greater detail in the original HORROR Wireline Product Vision and Workflow Scenarios documents, and more recently in the various use-case models, storyboards, mockups and prototypes produced to date. (See the References section at the end.) In addition to the main areas mentioned here, smaller usability improvements will be made throughout the system, as ease of use is truly a global effort."
"•Console-based user interface (1). HORROR will employ a console-based user interface design to present, in an organized and easily accessible manner, all the information required by the user to perform some high-level activity, e.g. log a MINE. Consoles manage and curtail the problem of the potentially large number of windows that all want their own place on the screen, by putting each window (or the means to access it) in its own especially reserved spot. Consoles contain windowpanes, which in turn display various kinds of information, e.g. a log, an equipment sketch, or a display showing the position of the equipment in the borehole. Some general console features and the currently planned HORROR consoles are listed below."

Requirement 4: Greater software development productivity (as written)
"A primary goal of HORROR is to provide a much more productive software development environment than was previously the case. In addition to traditional software development by professional software personnel, this goal is aimed at facilitating the development of exploratory or custom software or reports by personnel, such as tool or interpretation algorithm developers, whose software expertise is more modest. A related aspect of this goal is that the software development difficulty should scale, i.e. simple applications should be easy to develop."

GILB COMMENT:
–Same comments as above.
–The major concept (Productivity) is NOT defined.
–No level of productivity is numerically and testably set.
–It could easily be (ask me how!).

The specification continues:
"... and that only developers who need very fine-grained control of the underlying functionality should be burdened with the complexity that such control brings. Below is a list of specific HORROR features or attributes directed at achieving this objective. (See also section .)

Comment: This does not mean, however, that HORROR will require fewer software developers than our current systems did, because while the developers will be more productive 'per feature', the target is also higher, in terms of both richer functionality and greater ease of use.

•Uses industry-standard software components and tools (1) (and it goes on with design; see note). A central theme of HORROR development is to rely much more heavily on commercial software technology and ride on the industry's coattails, to reduce development costs and provide functionality that might otherwise be difficult or prohibitively expensive to develop in-house. This strategy is also motivated by the desire to facilitate hiring and training.

Comment: For MAXIS or IDEAL, the software developer had to master a very large body of CORPORATION-specific software technology, but otherwise generally needed to know only C and maybe Fortran. For HORROR, the software developer will need to know considerably less CORPORATION-proprietary technology (which will facilitate hiring and training), but the amount of commercial (generally Microsoft) technology they will need to be proficient in will grow accordingly. New hires can be chosen to already possess most of these skills; however, current employees will need to be allocated time for training.

•Built using Microsoft technology (1). HORROR will be built with and upon mainstream Microsoft technology, i.e. currently Windows 2000, Visual C++, Visual Basic, COM+, etc., with a gradual migration to .Net expected starting in 2001.
•Incorporates 3rd-party software products wherever possible (2) ..."

More detail on the Gilb comment:
•See my Ericsson example, on request. You have to tailor the productivity definition to specific types of productivity, for specific types of people.
•A mass of nice design ideas is listed, which may or may not deliver the undefined productivity. There is no evidence, or specific numeric assertion, of how much productivity, or what kind, they will deliver, or what their benefit-to-cost will be.
•This is a recipe for endless incorporation of nice-sounding ideas (“developed as a set of binary software components”), with no particular end in sight, and no impressive productivity improvement. At best, these ideas need to be in a design or architecture document, NOT here in the (undefined) requirements. Finally, they need to be directly related, quantitatively, to the requirements, using an impact estimation table (see Gilb, CE) to estimate, and then, during project development, to track and measure actual progress towards the requirement Goal levels.

Requirement 5: Rich support for next-generation tools and applications (as written)
"HORROR will provide a richer set of functionality for supporting next-generation logging tools and applications. Provided features include:
–Richer equipment model. HORROR will provide a richer equipment model that better fits modern hardware configurations."

GILB COMMENT:
–Total lack of quantified definition of what this “Supportability” is.
–It could easily be defined as a clear, quantified requirement.
–Masses of nice-sounding, gratuitous design ideas, unjustified in relation to the (undefined) requirement.
–A license to keep on implementing all these things endlessly, with no end in sight, and no responsibility for costs or effects.

From the specification:
"Multiple copies of the same tool in the tool string or bottom-hole assembly (1). HORROR will support multiple copies of the same tool in the tool string or bottom-hole assembly, e.g. multiple MDT modules of the same type, multiple borehole seismic receivers, etc. Although not technically an equipment model issue, this functionality will also support multiple copies of the same OSDD data channels produced within a single equipment string, e.g. two tools which both produce GRs.

Comment: This functionality has traditionally been provided by hard-coding a predetermined number of copies of the tool in question, and providing alternate names for the duplicate output channels. This workaround has led to a number of problems, including: (1) whoever wants to read the DLIS file is confronted with duplicate data channels that measure essentially the same quantity but have different names; (2) if the predetermined number of copies isn't enough, another application kit is required to run more, which is not a quick process; (3) software maintenance becomes error-prone, as the same functionality is replicated in the code.

Comment: For this to work, the tool hardware must also support multiple copies of the tool in the tool string. Only certain hardware (like MDT) does this.

•Better support for modular tools (1) ..."

Requirement 6: Rock-solid robustness (as written)
"While robustness is an essential HORROR requirement in all its uses, it is especially critical in MINING applications, where the much longer job durations afford software defects (e.g. memory leaks) a greatly expanded opportunity to surface.
In this regard, HORROR will provide the following features or attributes:
–Minimal downtime. A critical HORROR objective is to have minimal downtime due to software failures. This objective includes:
•Mean time between forced restarts > 14 days. HORROR's goal for mean time between forced restarts is greater than 14 days.
Comment: This figure does not include restarts caused by hardware problems, e.g. poorly seated cards, or communication hardware that locks up the system. MTBF for these items falls under the domain of the hardware groups.
•Restore system state < 10 minutes.
–Log scripts and test scripts, subsystem tests.
–Built-in testability. HORROR will provide the following features and attributes to facilitate testing.
–Tool simulators."

GILB COMMENT:
–For once, a reasonable attempt was made to quantify the meaning of the requirement!
–But it could be done much better.
–As usual, the set of designs to meet the requirement does not belong here.
–None of them makes any assertion about how well (to what degree) they will meet the defined numeric requirements.
–And, as usual, most of the content is another guarantee of eternal costs in pursuit of a poorly defined requirement.

“Rock Solid Robustness” Defined Clearly in Planguage, over a beer
•Rock Solid Robustness:
•Type: Complex Product Quality Requirement.
•Includes: {Software Downtime, Restore Speed, Testability, Fault Prevention Capability, Fault Isolation Capability, Fault Analysis Capability, Hardware Debugging Capability}.

Software Downtime
•Software Downtime:
•Type: Software Quality Requirement.
•Ambition: to have minimal downtime due to software failures <- HFA 6.1
•Issue: does this not imply that there is a system-wide downtime requirement?
•Scale: <…>
•Fail [Any Release or Evo Step, Activity = Recompute, Intensity = Peak Level] 14 days <- HFA 6.1.1
•Goal [By 2008?, Activity = Data Acquisition, Intensity = Lowest Level] 300 days??
•Stretch: 600 days

Restore Speed
•Restore Speed:
•Type: Software Quality Requirement.
•Ambition: should an error occur (or the user otherwise desire to do so), Horizon shall be able to restore the system to a previously saved state in less than 10 minutes. <- HFA 6.1.2
•Scale: duration from Initiation of Restore to a Complete and Verified state of a defined [Previous: Default = Immediately Previous] saved state.
•Initiation: defined as {Operator Initiation, System Initiation, ?}. Default = Any.
•Goal [Initial and all subsequent releases and Evo steps] 1 minute?
•Fail [Initial and all subsequent releases and Evo steps] 10 minutes. <- HFA 6.1.2
•Catastrophe: 100 minutes.
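A complex requirement like Rock Solid Robustness is met only when all of its included elementary requirements are met. A minimal sketch follows; the measured levels are hypothetical, and only two of the seven included requirements are shown, with Goal levels taken from the specs above:

```python
# Sketch: a complex quality requirement as the conjunction of its parts.

goals = {                        # Goal levels, per the Planguage specs above
    "Software Downtime": 300,    # days between forced restarts; more is better
    "Restore Speed": 1,          # minutes to a verified restore; less is better
}
measured = {"Software Downtime": 340, "Restore Speed": 4}   # hypothetical

def meets(tag: str) -> bool:
    if tag == "Restore Speed":               # a "smaller is better" scale
        return measured[tag] <= goals[tag]
    return measured[tag] >= goals[tag]       # a "larger is better" scale

rock_solid = all(meets(tag) for tag in goals)   # complex = all parts met
print("Rock Solid Robustness goals met:", rock_solid)   # False: restore too slow
```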
Testability
•Testability:
•Type: Software Quality Requirement.
•Version: 20 Oct 2006.
•Status: demo draft.
•Stakeholder: {Operator, Tester}.
•Ambition: rapid-duration automatic testing of <…>, with extreme operator setup and initiation.
•Scale: the duration of a defined [Volume] of testing, or a defined [Type], by a defined [Skill Level] of system operator, under defined [Operating Conditions].
•Goal [All Customer Use, Volume = 1,000,000 data items, Type = WireXXXX vs DXX, Skill = First-Time Novice, Operating Conditions = Field {Sea or Desert}] < 10 minutes.
•Design Hypothesis: tool simulators; reverse cracking tool; generation of simulated telemetry frames entirely in software; application-specific sophistication; for drilling, recorded-mode simulation by playing back the dump file; application test harness console. <- HFA 6.2.1

So once again, what is testability, exactly? Although testability is mentioned in the abstract of the recent WCAG 2.0 working draft documents, and expanded in the “Conformance” section, a full definition sits not in the glossary but in the Requirements for WCAG 2.0 Checklists and Techniques, dated 7 February 2003. Within this document you will find the only definition of testability as it applies to WCAG 2.0:
•Testable: either Machine Testable or Reliably Human Testable.
•Machine Testable: there is a known algorithm (regardless of whether that algorithm is known to be implemented in tools) that will determine, with complete reliability, whether the technique has been implemented or not. Probabilistic algorithms are not sufficient.
•Reliably Human Testable: the technique can be tested by human inspection, and it is believed that at least 80% of knowledgeable human evaluators would agree on the conclusion. The use of probabilistic machine algorithms may facilitate the human testing process, but this does not make it machine testable.
•Not Reliably Testable: the technique is subject to human inspection, but it is not believed that at least 80% of knowledgeable human evaluators would agree on the conclusion.

The Confirmit Case Study, 2003-2013
•See the paper on this case at www.gilb.com (Papers/Cases/Slides, Gilb Library):
–http://www.gilb.com/tiki-download_file.php?fileId=152
–http://www.gilb.com/tiki-download_file.php?fileId=50
–http://www.gilb.com/tiki-download_file.php?fileId=32
•And see the papers (IEEE Software, Fall 2006) by Geir K. Hanssen, SINTEF.
•Their product: Confirmit.
•Chief Storyteller: Trond Johansen.

Real Example of 1 of the 25 Quality Requirements
•Usability.Productivity (taken from Confirmit 8.5 development)
–Scale for quantification: time in minutes to set up a typical specified Market Research report (a set of predefined steps, performed to produce a standard MR report).
–Past Level [Release 8.0]: 65 minutes.
–Tolerable Limit [Release 8.5]: 35 minutes.
–Goal [Release 8.5]: 25 minutes.
–Note: the end result was actually 20 minutes.
–Meter [Weekly Step]: candidates with Reportal experience, and with knowledge of MR-specific reporting features.
(Trond Johansen)

Shift: from Function to Quality
•Our new focus is on the day-to-day operations of our Market Research users,
–not a list of features that they might or might not like: 50% are never used!
•We KNOW that increased efficiency, which leads to more profit, will please them.
–The ‘45 minutes actually saved × thousands of customer reports’ = big $$$ saved.
•After one week we had defined more or less all the requirements for the next version (8.5) of Confirmit.
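The Impact Estimation Table on the next slide reports progress as a percentage. Reading "% of the way to Goal" as the fraction of the Past-to-Goal gap closed reproduces the slide's own 50% and 95% figures, so a sketch of that arithmetic is:

```python
# Sketch: the percentage-progress arithmetic behind the IET figures.
# Usability.Productivity: Past [8.0] = 65 min, Goal [8.5] = 25 min.

PAST, GOAL = 65.0, 25.0

def progress(minutes_saved: float) -> float:
    """Fraction of the Past-to-Goal gap closed, in percent."""
    return 100.0 * minutes_saved / (PAST - GOAL)

print(f"estimated 20 min saved -> {progress(20):.0f}% of the way to Goal")  # 50%
print(f"actual    38 min saved -> {progress(38):.0f}% of the way to Goal")  # 95%
```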
FIRM (Future Information Research Management, Norway) project step planning and accounting: using an Impact Estimation Table
•IET for MR Project – Confirmit (<- FIRM product brand) 8.5
•Solution: Recoding
–Make it possible to recode variables on the fly from Reportal.
–Estimated effort: 4 days.
–Estimated Productivity improvement: 20 minutes (50% of the way to Goal).
–Actual result: 38 minutes (95% progress towards Goal).
Trond Johansen

EVO Plan: Confirmit 8.5 in Evo Step Impact Measurement
•4 product areas were attacked in all: 25 qualities concurrently, in one quarter of a year. Total development staff = 13.
Trond Johansen

Confirmit Evo Weekly Value Delivery Cycle

Evo’s impact on Confirmit product qualities, 1st Qtr
•Only 5 highlights of the 25 impacts are listed here.

Release 8.5: Initial Experiences and Conclusions
•EVO has resulted in
–increased motivation and
–enthusiasm amongst developers;
–it opens up for empowered creativity.
•Developers
–embraced the method and
–saw the value of using it,
–even though they found parts of Evo difficult to understand and execute.
Trond Johansen

Project leaders feel:
•Defining good requirements can be hard. It was hard to find Meters which were practical to use, and which at the same time measured real product qualities.
•Sometimes we would like to spend more than a day on designs, but this was not right according to our understanding of Evo. (The concept of backroom activity was new to us.)
•Sometimes it takes more than a week to deliver something of value to the client. (The concept of backroom activity was new to us.)
•We launched our first major release based on Evo in May 2004 (Rel. 8.5), and we have already gotten feedback from users on some of the leaps in product qualities. E.g. the time for the system to generate a complex survey has gone from 2 hours (= waiting for the system to do work) to 15 seconds!

Conclusions
•The method’s positive impact on Confirmit product qualities has convinced us that
–Evo is a better-suited development process than our former waterfall process, and
–we will continue to use Evo in the future.
•What surprised us the most was
–the method’s power of focusing on delivering value for clients versus cost of implementation.
–Evo enables you to re-prioritize the next development steps based on the weekly feedback.
–What seemed important at the start of the project may be replaced by other solutions, based on knowledge gained from previous steps.
•The method has
–a high focus on measurable product qualities, and
–defining these clearly and testably requires training and maturity.
–It is important to believe that everything can be measured, and to seek guidance if it seems impossible.
Trond Johansen
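A minimal sketch of the step-by-step impact accounting behind the Impact Estimation Table above: after each weekly Evo step, the measured % of each Past-to-Goal gap closed is added to a running total. The 95% figure is the actual Recoding result quoted earlier; "Week 2: New wizard" and its numbers are hypothetical illustrations.

```python
# Cumulative % of each Past -> Goal gap closed, updated after each weekly step.
steps = [
    ("Week 1: Recoding",   {"Usability.Productivity": 95}),  # actual result from the IET
    ("Week 2: New wizard", {"Usability.Productivity": 10,
                            "Intuitiveness": 30}),            # hypothetical step
]

cumulative = {}
for name, impacts in steps:
    for quality, pct in impacts.items():
        cumulative[quality] = cumulative.get(quality, 0) + pct
    print(f"{name}: {cumulative}")
```

Keeping this running account per quality is what lets a team re-prioritize the next step on real weekly feedback, as the Confirmit conclusions above describe.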
Evo’s impact on Confirmit 9.0 product qualities (results from the second quarter of using Evo), 1 of 2:

Product quality | Customer value                | Description (Scale)
Productivity    | Time reduced by 38%           | Time in minutes for a defined advanced user, with full knowledge of 9.0 functionality, to set up a defined advanced survey correctly.
Intuitiveness   | Probability increased by 175% | Probability that an inexperienced user can intuitively figure out how to set up a defined Simple Survey correctly.

Evo’s impact on Confirmit 9.0 product qualities (results from the second quarter of using Evo), 2 of 2:

Product quality | Customer value                                        | Description (Scale)
Productivity    | Time reduced by 83%, error tracking increased by 25%  | Time (in minutes) to test a defined survey and identify 4 inserted script errors, starting from when the questionnaire is finished to the time testing is complete and it is ready for production. (Defined Survey: complex survey, 60 questions, comprehensive JScripting.)
Performance     | Number of responses increased by 1400%                | Number of responses a database can contain if the generation of a defined table should run in 5 seconds.
Scalability     | Number of panelists increased by 700%                 | Ability to accomplish a bulk update of X panelists within a timeframe of Z seconds.
Performance     | Number of panelists increased by 1500%                | Maximum number of panelists that the system can support without exceeding a defined time for the defined task, with all components of the panel system performing acceptably.

Code quality – ”green” weeks
•Confirmit (2005, Norway) decided to design ‘ease of change’ into a legacy system, in one-week delivery cycles, once per month, using ‘Evo’: agile ‘refactoring to reduce technical debt’ -> re-engineering.
•In these ”green” weeks, some of the deliverables will be less visible to the end users, but more visible to our QA department.
•We manage code quality through an Impact Estimation table, with dimensions such as: Speed, Maintainability, NUnit Tests, Peer Tests, TestDirector Tests, Robustness.Correctness, Robustness.Boundary Conditions, ResourceUsage.CPU, Maintainability.DocCode, SynchronizationStatus.
(I think he said these were every 4th or 5th week. TG, May 7 2005; edited March 27 2013.)

What is ‘Architecture’?
(Presented at JavaZone, Oslo, Sept 2011 © Gilb.com)
•What is your personal best definition?
•Explain, in 1 sentence, what you do as an architect.
•Can you refer to an international ‘standard’ for the concept ‘Architecture’?
•Did you know that the origin of the term is ‘master builder’ (byggmester)? ORIGIN mid 16th cent.: from French architecte, from Italian architetto, via Latin from Greek arkhitektōn, from arkhi- ‘chief’ + tektōn ‘builder’.

Architect = Master Builder
•‘Architect’ is from ‘Archi-Tecton’, which means ‘Master Builder’.
•‘Archi’ is not from ‘Arch’, but from ‘Arche’: primitive, original, primary.
(Contributed by Niels Malotaux, August 27 2002)

•The architecture is there to satisfy requirements.
•“The closer an object is to fulfilling its purpose, the closer it is to perfection.” – Aristotle’s belief (via Simon Wright, May 2012, http://www.thefreeresource.com/aristotle-facts-information-and-resources-about-the-great-philosopher)
•From the section “What did Aristotle believe about human nature?”: One of Aristotle’s prime beliefs was that everything in nature has a defined purpose. A knife is to cut; an eye is to see. The closer an object is to fulfilling its purpose, the closer it is to perfection. He saw man as being the highest form of existence, as man was the only rational being on earth. This ideology led him to conclude that all lower forms of life existed in order to serve the needs of mankind.
This reasoning led him to support slavery, especially of non-Greek or ‘barbarian’ tribes, whom he saw as being inferior and less rational than the Greeks.

Oslo Opera House requirements
•Qualities
•Costs
•Constraints

Oslo Opera House requirements (a guess)
•Qualities
–Impressive
–Acoustics
–Flexibility
–Extendibility
–Integratedness
–Performance Visibility
–National Symbol
–Access to Fjord View
–Comfort
•Costs
–Building
–Maintenance
–Operational manpower
•Constraints
–Legal Building
–National Architecture
–Archeological Site
–Local Materials
–Local Labour

•The architecture is there to satisfy requirements.
–Architecture that never refers to necessary qualities, performance characteristics, costs, and constraints is not really architecture of any kind.
–The architecture process is driven by requirements.

Real (IT/SW) Architecture vs Pseudo Architecture
•Real Architecture
–Has multidimensional, clear design performance objectives.
–Has clear, multiple constraints.
–Produces architecture ideas which enable and permit objectives to be met reasonably, within constraints.
–Estimates expected effects.
•Pseudo Architecture
–Lacks dedication to clear objectives and constraints.
–Does not estimate or articulate the expected effects, on objectives and constraints, of its suggestions.
–Does not mention goals and constraints.

‘Bad’ ‘Architecture’ definitions
•“Software architecture is a collection of software components unified via interfaces into a decomposable system based on one or more technology platforms.”
•“Software architecture shows the structure and behaviour of a system, comprising software elements, and exposing the properties of those elements and the relationships among them.”
•Uninformative diagrams.
(Source of quotes: http://www.sei.cmu.edu/architecture/start/community.cfm)

Better Architecture
•Better definitions.
•Real architecture diagrams.
•“Software … needs to address the needs of business stakeholders within the organizational, technical and any other constraints, to achieve the business, technical or any other goals. It also needs to address software trustworthiness characteristics like reliability, availability, maintainability, robustness, safety, security and survivability.”
•System architecture should contain goals/requirements artifacts, and structure and behavior artifacts based on those goals.
(Source of quotes: http://www.sei.cmu.edu/architecture/start/community.cfm; other definitions: http://www.bredemeyer.com/definiti.htm)

A Distinction
•Architecture Process
–A continuous, lifecycle-long activity of finding means for ends.
•Architecture Specification
–A specification of a set of means for a set of ends.

We argue that the following are absolute essentials for ‘real’ architecture:
•The Architecture Process has
–clear, multiple objectives;
–clear constraints;
–a process of identifying and analyzing (estimating the effects of) potential means for reaching objectives, within constraints.
•The Architecture Specification has
–well-defined components, able to deliver predictable attributes;
–credible estimates of the multiple effects of each component, and of the whole.
(Illustration source: Institute for Architecture Enterprise Development)

Why are these architecture essentials essential?
•Failure to reach even one ‘critical’ objective can mean total system failure.
–Example: reliability.
•Failure to respect even a single constraint can mean total system failure.
–Example: cost.
•And if they are missing…
–You cannot expect the specified architecture to reach its objectives, within constraints.
–You have lost architectural control.

What a Difference
•A Real Architect
–Can and does estimate the resources needed for any suggested architecture: capital cost, maintenance cost, skilled-people hours to install and maintain.
–Can and does estimate the impact of each architecture component on the top-level critical objectives: all ‘-ilities’ (security etc.), all performance (capacity etc.).
•A False Architect
–Does not even try to estimate any costs of any architectures; does not know how to do so if asked; if they try to estimate, they are at least 10x wrong.
–Does not even try to estimate the numeric impact on even the most critical architectural objectives.
–Does not even realize they need quantified performance and quality objectives to drive and justify architecture.
–Has no specific, verifiable idea of the impact their ideas have on numeric quality and performance levels.
–It is all ‘smoke and mirrors’.
–Takes no responsibility for the performance and quality attributes, or costs, of their suggested architecture: no skin in the game.
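The estimation discipline demanded of the "real architect" above is exactly what the Impact Estimation idea introduced on the next slides provides. A minimal sketch, under stated assumptions: the design ideas, objective names, percentages, and costs below are hypothetical illustrations, not figures from any case in this deck.

```python
# Each design idea is scored for its estimated % impact on every critical
# objective (100% = closes the whole Past -> Goal gap) and for its cost.
objectives = ["Reliability", "Usability", "Maintainability"]

design_ideas = {
    "Design Idea A": {"impacts": [50, 20, 10], "cost": 40},
    "Design Idea B": {"impacts": [10, 60, 30], "cost": 25},
}

for name, idea in design_ideas.items():
    per_objective = ", ".join(
        f"{obj} {pct}%" for obj, pct in zip(objectives, idea["impacts"]))
    ratio = sum(idea["impacts"]) / idea["cost"]
    print(f"{name}: {per_objective}; value-to-cost ratio {ratio:.1f}")
# The idea with the best value-to-cost ratio is the natural next step.
```

Forcing every suggested component through this table is what separates estimated, accountable architecture from "smoke and mirrors".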
(Presented at JavaZone, Oslo, Sept 2011 © Gilb.com)

Multiple required performance and cost attributes are the basis for architecture selection and evaluation. (Drawn by Lindsey Brodie for the CE book.)

Planguage Glossary (full glossary of 650+ concepts; download at www.gilb.com: http://www.gilb.com/tiki-download_file.php?fileId=387)

Architecture (collective noun): Concept *192, May 9 2005
•The ‘architecture’ is
–the set of entities that in fact exist
–and impact a set of system attributes,
–directly or indirectly, by
•constraining,
•or influencing,
–related engineering decisions.

Architecture (collective noun): Concept *192, August 27 2002 (older definition, using the term ‘selected’)
•The ‘architecture’ is the set of design artifacts which are selected to satisfy a set of system-and-stakeholder requirements, by constraining, or influencing, related engineering decisions.

Requirement
•A requirement is a stakeholder-valued system state, under stated conditions.
•Concept *026 (Planguage Glossary, 2012): http://www.gilb.com/tiki-download_file.php?fileId=386

Impact Estimation Basic Concepts
(Source: Lindsey Brodie, Editor of Competitive Engineering, May 2000)
•Impact Estimation: how much do the candidate designs impact all critical cost and quality attributes?
[Diagram: candidate Design Ideas A and B, each estimated for impact on function, performance and cost attributes.]
•Figure 1: a real (NON-CONFIDENTIAL version) example of an initial draft of setting the objectives that engineering processes must meet: business objectives, quantified strategy, impact estimation, cost.
•Figure 2: an ‘Impact Estimation Table’ (see the ‘CE’ book). A set of 12 proposed engineering processes, with about $100,000,000 in total investment projected over time, is evaluated theoretically for its impact on 13 business objectives (defined in Fig. 1 above).

THE PRINCIPLE OF 'QUALITY QUANTIFICATION'
•All qualities can be expressed quantitatively; 'qualitative' does not mean unmeasurable.

"In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.”
– Lord Kelvin, 1893 (from http://zapatopi.net/kelvin/quotes.html)

"Thank you for the lecture you gave at OGI. It gave me much food for thought and action, particularly in the realm of requirements specification. Much to learn, much to apply. As for your Lord Kelvin quote, it prompted an immediate Google search.
I found a fuller version of the quote at http://zapatopi.net/kelvin/quotes.html (quoted in full above). While his full quote is narrower in scope (physical science), I think your extension of the idea to the 'requirements science' arena is equally applicable."
– Benjamin Ward <benjamin.ward@tek.com>

A June 1 contribution from Michael Jackson, London:
"George Miller, the psychologist usually credited with the magic number seven, plus or minus two, wrote: 'In truth, a good case could be made that if your knowledge is meagre and unsatisfactory, the last thing in the world you should do is make measurements. The chance is negligible that you will measure the right thing accidentally.' Meagre and unsatisfactory knowledge about separability can't be improved by measuring anything, or even by thinking about what you might measure."

Value Management (Evo) with Scrum development
•Developing a large web portal (www.bring.no / .dk / .se / .nl / .co.uk / .com / .ee) at Posten Norge.
(Copyright: Kai@Gilb.com. Slides download: http://bit.ly/BringCase)

We have a challenge…
•Do you know what the problem is? Is it tools? Management? Would you like management to embrace Agile? Listen up!

(Speaker notes:) We have a problem, Agileists! We have a problem, Scrum Masters! We have a problem, Product Owners! Do you know what the problem is? What is your main problem with Agile development? Is your main problem with development? Probably not. Is your main problem with tools? Is your main problem with higher-level management? Yes? Do they understand what we are doing? Would you like management to understand Agile? Would you like them to support you more? Would you like them to participate from their side? Would you like management to embrace Agile development? Listen up!

Why is it difficult for management to embrace Agile? Is Agile difficult to understand? No: it has simple prescriptive recipes, like Scrum. Management can't get a grip on (and I would say rightly so) how they can manage the end results that development, agile or not, is there to give them: the value benefits it will give their customers and stakeholders, what it will cost, and when it will be done. Is there anything, in say Scrum, that even tries to address the promise of results to stakeholders: business results, stakeholder value results? How much! Of what values! To whom! At what costs? If you cannot communicate that to a manager, you don't deserve his ears. Do you deserve his ears? Deliver value to stakeholders, within agreeable resources. No external value delivery?
Not even a thought about stakeholders? It is all about YOU: “You, the developer, have become the center of the universe!” <- Scott Ambler

Scrum
•“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
•“Working software is the primary measure of progress.”
•Good luck!!!
•Deliver value to stakeholders, within agreeable resources.
•Should we not try to understand and define what our stakeholders value, and set out to deliver that? Well, that is what we do at Bring, and that is what we always do with Value Management, and I can't wait to tell you how you can do that!

History
•Posten Norge AS bought a series of companies
–within logistics, package transport, CRM and storage,
–in Norway, Sweden, Denmark, Finland, the UK, Holland and Estonia.

Some Players
•Posten Webteam – Value Management Certified
–Project Owner: Anne Hognestad anne.hognestad@posten.no
–Product Owner: Terje Berget terje.berget@posten.no
–Lin Smitt-Amundsen & Kristin Nygård
–Many business groups and internal stakeholders
–Kjetil Halvorsen kjetil.halvorsen@posten.no
•Bekk & Ergo Group
–Scrum Master: Fredrik Bach fredrik.bach@bekk.no
–Technical Architect: Stefan M. Landrø stefan.landro@bekk.no
–Graphics: Espen Satver
–Morten Wille Johannessen, Markus Krüger, Dag Stepanenko
•NetLife Research
–User Experience: Gjermund Also gjermund@netliferesearch.com, Kjell-Morten Bratsberg Thorsen
•Management Coach: Kai Gilb
(Copyright: Kai@Gilb.com)

Value Management Process
•Stakeholders -> Values -> Solutions -> Decompose -> Develop -> Deliver -> Measure -> Learn.
•Value management techniques in and out of Scrum: value decisions flow between Management and Developers.

Value Decision Tables (three linked levels; the percentages are estimated impacts on each value, and on resources):

Business Goals   | Stakeholder Value 1 | Stakeholder Value 2
Business Value 1 | -10%                | 40%
Business Value 2 | 50%                 | 10%
Resources        | 20%                 | 10%

Stakeholder Values  | Product Value 1 | Product Value 2
Stakeholder Value 1 | -10%            | 50%
Stakeholder Value 2 | 10%             | 10%
Resources           | 2%              | 5%

Product Values  | Solution 1 | Solution 2
Product Value 1 | -10%       | 40%
Product Value 2 | 50%        | 80%
Resources       | 1%         | 2%

•Prioritized list: 1. Solution 2; 2. Solution 9; 3. Solution 7.
•Scrum develops the prioritized solutions; we measure the improvements, learn, and repeat.

Wargame Value Decision Table
•Product values: Find.Fast, Sorted.Needs. Solutions: Service Guide, Resources.External, Resources.Internal.

“Our challenge is to measure in practice.” (”Utfordringen vår er å få til måling.”)
•Anne Hognestad, Project Owner: anne.hognestad@posten.no

Measurements: Establishing Past Levels
•Past [March 2008]: ?? sec.
•Scale: Average time, in seconds, a User with defined [User-Experience, default = Normal] uses to find what they, and we, want them to find.

Use Cases (these were used to measure the effectiveness of different solution alternatives):
1. Send a contract to another company in Oslo.
It has to be delivered within two hours. Correct: (Express – Budservice)
2. Send five books to an office in Trondheim. The time it takes is not critical. Correct: (Logistics – Bedriftspakke Dør-til-Dør)
3. You are selling sofas. You store them in Kolbotn and ship them to customers across the country. Find a service to deliver the sofas from your warehouse to your customers' homes. Correct: (Logistics – Hjemlevering, Nasjonalt gods)
4. You have a container stocked with bicycles that you are going to ship to South Africa. Find a product/service that will do this for you. Correct: (Logistics – FCL, Full Containerlast)
5. You are expecting a shipment of frozen vegetables. Find a service to store them for 2-3 months. Correct: (Frigoscandia – Fryselagring, Lagertjenester)
6. You want to send advertising to families with children in Tvedestrand, and want to add addresses that you do not have in your customer database. Correct: (Dialogue – Målgrupper og adresser)
7. You are tasked by your company to find the most profitable way for them to send mail. Your company normally sends about 500 to 600 letters a month. Correct: (Mail – Fleksipost)
8. You have already sent an offer to a list of potential customers, and you now want to send a follow-up offer to the customers that have not responded. Find the service. Correct: (CityMail – Effekt och oppföljning)

Penalty Time: a device for getting a more realistic measure of customer success in finding our services
•Wrong Service: the service the user chose would NOT do the task. +300 seconds.
•Suboptimal Service: the service the user chose could do the task, but it is not the optimal service. +30-120 seconds.
(Added by Tom, 24 March 2013, for DHL)

Result data from testing 5 users on Find.Fast: 197 seconds.

Measurements: Establishing Past Levels
•Scale: Average time, in seconds, a User with defined [User-Experience, default = Normal] uses to find what they, and we, want them to find.
•Past [March 2008]: 197 seconds.
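A minimal sketch of how such a penalised meter could be computed. The 60-second suboptimal penalty is one choice within the 30-120 second range the rule allows, and the observation data is assumed for illustration; only the penalty rules and the 197-second Past level come from the case.

```python
# Find.Fast meter with Penalty Time (assumed data; penalties per the rule above).
WRONG_SERVICE_PENALTY = 300   # seconds, for choosing a service that cannot do the task
SUBOPTIMAL_PENALTY = 60       # seconds; the rule allows 30-120 depending on the case

def penalised_time(raw_seconds: float, outcome: str) -> float:
    """Raw task time plus the penalty for a wrong or suboptimal service choice."""
    if outcome == "wrong":
        return raw_seconds + WRONG_SERVICE_PENALTY
    if outcome == "suboptimal":
        return raw_seconds + SUBOPTIMAL_PENALTY
    return raw_seconds        # correct service chosen

# Hypothetical observations for 5 users on one use case:
observations = [(120, "correct"), (90, "suboptimal"), (45, "wrong"),
                (150, "correct"), (80, "correct")]

average = sum(penalised_time(t, o) for t, o in observations) / len(observations)
print(f"Find.Fast level: {average:.0f} seconds")   # compare against Past = 197 s
```

The penalty converts "found *something* quickly" into "found the *right* service", which is what the stakeholder actually values.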
Measurements: Establishing Status Levels
•Scale: Average time, in seconds, a User with defined [User-Experience, default = Normal] uses to find what they, and we, want them to find.
•Past [March 2008]: 197 sec.
•Status [May 2009]: 148 sec (49 seconds faster).

Learn & Change
•Learning is defined as a change in behavior.

Value Decision Tables for the Bring case (the same three linked levels, now with concrete values):

Business Goals | Training Costs | User Productivity
Profit         | -10%           | 40%
Market Share   | 50%            | 10%
Resources      | 20%            | 10%

Stakeholder Values | Intuitiveness | Find.Fast
Training Costs     | -10%          | 50%
User Productivity  | 10%           | 10%
Resources          | 2%            | 5%

Product Values | GUI Style Rex | Service Guide
Find.Fast      | -10%          | 40% (later re-estimated to 35%)
Performance    | 50%           | 80%
Resources      | 1%            | 2%

•Prioritized list: 1. Service Guide; 2. Solution 9; 3. Solution 7.
•Scrum develops the prioritized solutions; we measure the improvements, learn, and repeat.

Stakeholder Value Examples
•KFS.Charging – Scale: number of customers per month that charge their postage meter (“frankeringsmaskin”) on www.Bring.no/Mail.
•Customerservice.Contact – Scale: % of customers that get the correct answer to their question the first time they contact Customer Service.
•Sales.Order.Number – Scale: number of completed sales per month from Self.Help.Solutions.
•Sales.Leadsgeneration – Scale: number of Electronic-Leads per month generated on bring.xx for the Specialists.
•SMB.Selfservice – Scale: % of SMB customers that use self-service solutions rather than other channels.
Shift of roles (before and after Value Management):
•Before: the Developers push technical solutions; the Business Owners want to make decisions about technical solutions; the Steering Committee thinks in, and understands, technical solutions; Project Management sits in between.
•After: the Business Owners are asked "What are your real needs?"; the Steering Committee signs off on Value Improvements; the Developers ask "What technical solution will give maximum Product Value improvements?"

The road ahead…
“Our challenge is to, in practice, make payments based on value delivery.”
•Anne Hognestad, Project Owner: anne.hognestad@posten.no

The Team
•(The same team as listed on the ‘Some Players’ slide above.) @kaigilb

To download this presentation
•You will find it here: http://www.gilb.com/FileGalleries. Direct link: http://bit.ly/BringCase
•End of the Bring case.
(Note, on total hours to complete the job: we decided not to plan, but to do it immediately; 4 hours faster, before the first meeting with the steering committee.)

Free Digital Book on Quality Quantification
•REQUEST “BOOK” in the subject line, to TOM @ GILB .com
•Tom Gilb, Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage
–and I will also send links to related papers on requirements and estimation.

•I think this is as far as I can get in my 45 minutes, but here is additional material.

Simon Ramo (TRW):
“No matter how complex the situation, good systems engineering involves putting value measurements on the important parameters of desired goals and performance, of pertinent data, and of the specifications of the people and equipment and other components of the system. It is not easy to do this, and so, very often, we are inclined to assume that it is not possible to do it to advantage. But skilled systems engineers can change evaluations and comparisons of alternative approaches from purely speculative to highly meaningful. If some critical aspect is not known, the systems experts seek to make it known.
They go dig up the facts. If doing so is very tough, such as setting down the public’s degree of acceptance among various candidate solutions, then perhaps the public can be polled. If that is not practical for the specific issue, then at least an attempt can be made to judge the impact of being wrong in assuming the public preference. Everything that is clear is used with clarity; what is not clear is used with clarity as to the estimates and assumptions made, with the possible negative consequences of the assumptions weighed and integrated. We do not have to work in the dark, now that we have professional systems analysis.”
(Ramo98, page 81: Simon Ramo and Robin K. St.Clair, The Systems Approach: Fresh Solutions to Complex Civil Problems Through Combining Science and Practical Common Sense, 1998, 150pp, © TRW, Inc., KNI Incorporated, Anaheim CA. Free copy at the TRW stand at the INCOSE conference, 2002.)

How to Quantify Quality
•Use known quantification ideas.
•Modify known quantification ideas to suit your current problems.
•Use your common sense and powers of observation to work out new measures.
•Learn early, learn often; adjust early definitions (Plan–Do–Study–Act).
•Define constraints (Fail) and targets (Goal, Wish):
–Fail [next year]: +0% <- not worse
–Goal [+5 years, …]: +30% <- TG
–Wish [2007, …]: +50% <- Marketing
•Define benchmarks:
–Past [2003]: +50% <- intuitive
–Record [2002, …]: 0%
–Trend [2007, …]: -30%

‘Environmentally Friendly’ Quantification Example
•Give the quality a stable name tag: Environmentally Friendly.
•Define the target level approximately: Ambition Level: a high degree of protection …
•Define a scale of measure: Scale: % change in environment.
•Decide a way to measure in practice: Meter: {scientific data…}.

Devices to help quantify quality ideas: a standard hierarchy of concepts (from Gilb, Principles of Software Engineering Management)
•QUALITY: {USABILITY, WORK-CAPACITY, ADAPTABILITY, AVAILABILITY}.
•AVAILABILITY: {RELIABILITY, MAINTAINABILITY}.
•MAINTAINABILITY steps: 1. Problem recognition; 2. Administrative delay; 3. Tools collection; 4. Problem analysis; 5. Change specification; 6. Quality control; 7. Do the change; 8. Test the change; 9. Recover from fault.

Using ‘Parameters’ when defining a Scale of Measure
•Using [qualifiers] in the Scale definition gives flexibility of detailed specification later.
•Example:
–Scale: the % of defined [Users], using defined [System Components], who can successfully accomplish defined [Tasks].
–Goal [Users = NOVICES, Components = USER MANUAL, Tasks = ERROR CORRECTION]: 60%.

Quality Quantification Process (full detail in ‘Competitive Engineering’, Scales chapter, and in the ‘QQ’ standards slides later)
•Entry: E1. Do not enter if you can reuse existing standards. E2. Do not enter if your source documents are poor.
•Procedure: P1. Use applicable rules (GR, QR, QQ). P2. Build a list of quality ideas needing control. P3. Detail qualities by exploding them hierarchically; use evolutionary or pilot feedback. P4. Revise your draft based on design work. P5. Quality-control the specification. P6. Get experience and then revise the specifications.
•Exit: X1. Do not exit if calculated remaining defects are more than one per page. X2.
Unless you intentionally do so, to learn more from experience.

General Hatmanship:
•GIST: improve the ability to have hats on the head and nearby.
•Hatmanship On Head: SCALE: hats on top of a person's head. PAST [Me, This Year]: 10 <- Guess. RECORD [2003, UK]: 15 <- GB Record. WISH [Guinness Record, April]: 20 <- Tom.
•Hatmanship Nearby: SCALE: hats not on the head, but on, or near, the body, within a 10-meter radius. Past…, Goal…, etc.

A ‘Quality Quantification’ Principle
•0. THE PRINCIPLE OF 'BAD NUMBERS BEAT GOOD WORDS'
•Poor quantification is more useful than none; at least it can be improved systematically.
(Cartoon: He had a lot of hats. He wants to be the best in hatmanship. Scale: hats on his head. Past: 3. Goal: 13.)

Quantify for realistic judgements
•“To leave [soft considerations] out of the analysis simply because they are not readily quantifiable, or to avoid introducing ‘personal judgments,’ clearly biases decisions against investments that are likely to have a significant impact on such considerations as the quality of one’s product, delivery speed and reliability, and the rapidity with which new products can be introduced.”
<- R. H. Hayes et al., “Dynamic Manufacturing”, p. 77, in MINTZBERG94: page 124

Principles for Quality Quantification
•Some hopefully deep and useful guidelines to help you quantify quality ideas.

0. THE PRINCIPLE OF 'BAD NUMBERS BEAT GOOD WORDS' (re-visited!)
•Poor quantification is more useful than none; at least it can be improved systematically.
•Vague words like ‘State of the Art’, ‘Flexibility’, ‘Enhanced Usability’, ‘Improved Performance’ are Not Clear!

1. THE PRINCIPLE OF 'QUALITY QUANTIFICATION'
•All qualities can be expressed quantitatively; 'qualitative' does not mean unmeasurable.
•“If you think you know something about a subject, try to put a number on it. If you can, then maybe you know something about the subject. If you cannot, then perhaps you should admit to yourself that your knowledge is of a meagre and unsatisfactory kind.” – Lord Kelvin, 1893

2. THE PRINCIPLE OF 'MANY SPLENDORED THINGS'
•Most quality ideas are usefully broken into several measures of goodness.
•Usability:
–Entry Qualification: Scale: IQ, …
–Learning Effort: Scale: hours to learn, …
–Productivity: Scale: tasks per hour, …
–Error Rate: Scale: faults per 100 tasks, …
–Like-ability: Scale: % of users who like the system, …

Quantifying Usability (a real Command & Control system)
•Hierarchy: QUALITY: {USABILITY, WORK-CAPACITY, ADAPTABILITY, AVAILABILITY}; USABILITY: {INTUITIVENESS, INTELLIGIBILITY}.
•Intuitiveness:
–GIST: great intuitive capability.
–SCALE: probability that an intuitive guess is right.
–METER: <100 observations.>
–PAST [GRAPES]: 80% <- LN. RECORD [MAC]: 9%? <- TG. Fail [TRAINED, RARE]: 50-90%. Goal [TASKS]: 99% <- LN.
•Intelligibility:
–GIST: super ease of immediate understanding.
–SCALE: % OK interpretations.
–METER: 10 operators, 100 infos, 15 minutes.
–P: PAST [20 ops., 300 infos, 30 min.]: 99%. RECORD [P]: 99.0%. Fail [DELIVERY[1]]: 99.0% <- MAB; [ACCEPTANCE]: 99.5%. Goal [M1]: 99.9% <- LN. AND MORE!
•Qualifier definitions:
–TRAINED: DEFINED: C&Ctl. operator, approved course, 200 hours duration.
–RARE: DEFINED: types of tasks performed less than once a week per operator.
–TASKS: DEFINED: onboard operator distinct tasks carried out.
–ACCEPTANCE: DEFINED: formal acceptance testing via customer contract.
–DELIVERY: DEFINED: evolutionary delivery cycle, integrated and useful.

Multiple required performance and cost attributes are the basis for architecture selection and evaluation. (Drawn by Lindsey Brodie for the CE book.)
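A minimal sketch of how [qualifiers] key different levels on one scale of measure, as in the [Users/Components/Tasks] example above. The Goal of 60% is from the slide; the Past level and the helper function are hypothetical additions for illustration.

```python
# One scale of measure; many benchmark/target levels, distinguished by qualifiers.
scale = ("% of defined [Users], using defined [Components], "
         "who can successfully accomplish defined [Tasks]")

levels = {
    # (parameter, users, components, tasks) -> numeric level on the scale
    ("Goal", "NOVICES", "USER MANUAL", "ERROR CORRECTION"): 60,  # from the slide
    ("Past", "NOVICES", "USER MANUAL", "ERROR CORRECTION"): 40,  # hypothetical benchmark
}

def level(parameter: str, users: str, components: str, tasks: str) -> int:
    """Look up the level specified for one combination of qualifier values."""
    return levels[(parameter, users, components, tasks)]

print(level("Goal", "NOVICES", "USER MANUAL", "ERROR CORRECTION"))  # -> 60
```

The point of the structure: adding a new condition set (say, EXPERTS in the FIELD) means adding a new keyed level, not inventing a new quality or a new scale.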
3. THE PRINCIPLE OF 'SCALAR DEFINITION'
•A Scale of measure is a powerful, practical definition of a quality.
–Flexibility: Scale: speed of conversion to a new computer platform.

(Quality) Requirements Specification Template: how we specify scalar attribute priority
•Ambition:
•Version:
•Owner:
•Type:
•Stakeholder: { , , } – “who can influence your profit, success or failure?”
•Scale:
•Meter [ ]:
•==== Benchmarks ==== (the past)
–Past [ ] <-
–Record [ , , ] <-
–Trend [ , ] <-
•==== Targets ==== (the future needs)
–Wish [ ] <-
–Goal […] <- Source; Value [Goal]
–Stretch [ ] <-
•==== Constraints ====
–Fail [ ] <- ‘Failure Point’
–Survival [ ] <- ‘Survival Point’

4. THE PRINCIPLE OF 'THREATS ARE MEASURABLE'
•If lack of quality can destroy your project, then you can measure it sometime; the only discussion will be 'how early?'.

5. THE PRINCIPLE OF 'LIMITS TO DETAIL'
•There is a practical limit to the number of facets of quality you can define and control, which is far less than the number of facets that you can imagine might be relevant.

6. THE PRINCIPLE OF 'METERS MATTER'
•Practical measuring instruments improve the practical understanding and application of ‘Scales of measure’.
–Portability: Scale: cost to convert, per module. Meter [Data]: measure per 1,000 words converted. Meter [Logic]: measure per 1,000 Function Points converted.

7. THE PRINCIPLE OF 'HORSES FOR COURSES'
•Different quality-Scale measuring processes will be necessary for different points in time, different events, and different places.
–Availability: Scale: % uptime for the system. Meter [USA, 2001]: Test X. Meter [UK, 2002]: Test Y.

8. THE PRINCIPLE OF 'BENCHMARKS'
•Past history and future trends help define words like "improve" and "reduce".
–Reliability: Scale: mean time to failure. Past [US DoD, 2002]: 30,000 hours. Trend [NATO Allies, 2003]: 50,000 hours. Goal [UK MOD, 2005]: 60,000 hours.

9. THE PRINCIPLE OF 'NUMERIC FUTURE'
•Numeric future requirement levels complete the quality definition of relative terms like 'improved'.
–Usability: Scale: time to learn an average task. Past [Old Product, 2003]: 20 minutes. Wish [New Product, 2007]: 1 minute. Stretch [End 2008, Students]: 2 minutes. Goal [End 2005, Teachers]: 5 minutes.

Some Planguage ‘Quality Quantification’ Concepts
•PAST: any useful reference point – your old product, a competitor's organization, a quality achieved in the same discipline but in a different branch of business.
•RECORD: best in some class; state of the art; something to beat; a challenge for you; an extreme PAST.
•TREND: a future guess based on the PAST.
•Survival: a level needed for survival of the entire system.
•Goal: the level needed for satisfaction, happiness, joy – and 100% full payment!
•Wish: a level desired by someone, but which might not be feasible; the project is not committed to it.

A Corporate Quality Policy (European multinational)
•1. QUANTIFY QUALITY
•2. CONTROL MULTIPLE DIMENSIONS
•3. EVALUATE RISK
•4. CONFIGURATION MANAGEMENT – TRACEABILITY
•5. DOCUMENT QUALITY EVALUATION
•6. EVOLUTIONARY DELIVERY CONTROL
•7. CONTINUOUS WORK PROCESS IMPROVEMENT
•Policy on QUANTIFICATION, CLARIFICATION AND TESTABILITY OF CRITICAL OBJECTIVES: “All critical factors or objectives (quality, benefit, resource) for any activity (planning, engineering, management) shall be expressed clearly, measurably, testably and unambiguously at all stages of consideration, presentation, evaluation, construction and validation.
” <- (Quality Manual sources: 5.2.2, 4.1.2, 4.1.5, 5.1.1, 6.1, 6.4.1, 7.1.1, 7.3, and many others.)

Einstein on Stretching
•“One should not pursue goals that are easily achieved. One must develop an instinct for what one can just barely achieve through one’s greatest efforts.” (1915)
•“We have to do the best we can. This is our sacred human responsibility.” (1940)
(Sources: [One should…] to former student Dällenbach, May 31 1915, while giving him some advice on an electrical engineering project; CPAE, Vol. 8, Doc. 87, in Alice Calaprice (Ed.), The Expanded Quotable Einstein, Princeton, 2000, page 233. [We have to…] from a conversation recorded by Algernon Black, fall 1940, Einstein Archive 54-834; in Calaprice, page 119.)

LAST SLIDE: see www.gilb.com for more detail. “Competitive Engineering” at www.gilb.com (or via memory stick, here at the conference, from the presenter).

Supporting Standards for Quality Quantification
•The following slides contain supporting standards, in detail, which I do not expect to have time to show in my lecture.

A Process for Quality Quantification (PROCESS.QQ)
ENTRY (ENTRY.QQ):
•1. Do not enter if company files or standards already have adequate quantification devices. Use existing quantification Scales and Meters preferably.
•2. Enter only if your process input documents (contracts, marketing plans, product plans, requirements specifications, for example) are quality controlled, and have exited at a known and acceptable standard of defect-freeness (default standard: less than 1 Major defect/page estimated remaining).

Procedure for the Quality Quantification Task (PROCEDURE.QQ)
•NOTE: the following steps cannot be performed simply sequentially. They need to be repeated many times to evolve realistic quality quantifications.
•1. Use applicable rules {RULES.GR, RULES.QR, RULES.QQ}.
•2. Build a list of all quality concerns from your process input documents. Include implicit quality requirements derived from design requirements. Include any recent practical experience, such as from evolutionary steps (of this project, pilot experiences, or prototypes).
•3. Detail the specification to a useful level. Include any recent practical experience, such as from evolutionary result delivery steps of this project.
•4. Revise these specifications when some design engineering/planning work has been done on their basis. Only through design work can you know about the available technology and its costs.
•5. Perform quality control (Inspection method), calculating remaining Major defects per page for the exit control. Apply valid rules {RULES.GR, RULES.QR, RULES.QQ}.
•6. Get experience using these specifications, and revise the specifications to be more realistic.
•7. Repeat this process until you are satisfied with the result.
•8. Accumulate your improved-idea experiences and make them available to others.

EXIT (EXIT.QQ):
•1. Calculated remaining Major defects/page less than 1.
•2. Or: exit condition “1.” above is waived, with the intent of getting experience or opinions, so as to refine the specification for official exit and more serious use.
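A minimal sketch of the EXIT.QQ gate. The calculation shown, and the 50% inspection-effectiveness figure and defect counts, are assumptions for illustration; Gilb's Inspection method derives the remaining-defect estimate from calibrated sampling data, not from this simple heuristic.

```python
def remaining_majors_per_page(found_majors: int, pages: int,
                              effectiveness: float = 0.5) -> float:
    """Estimate Majors still left per page, assuming the inspection found
    only a fraction (`effectiveness`) of all Majors actually present."""
    total_estimated = found_majors / effectiveness
    return (total_estimated - found_majors) / pages

estimate = remaining_majors_per_page(found_majors=12, pages=20)
print(f"~{estimate:.1f} remaining Majors/page ->",
      "exit allowed" if estimate < 1 else "no exit")
```

Whatever the estimation model, the point of the gate is the same: a specification does not exit to serious use while its estimated remaining Major defect density is at or above one per page.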
Specific Rules for Quality Quantification (QQ)
4.3. Rules: Quality Quantification (RULES.QQ)
•The following rules would be appropriate for a culture intent on raising quality specifications to a high level, and on systematically learning as a group, in the long term, from its own experiences and those of others.
•The rules are guidance for any writer or maintainer of quality specifications.
•Violations of these rules would be classed as 'defects' in a quality control process on the document.

Rules for Quality Quantification (RULES.QQ), 1 of 2
•0:RULES: The rules for technical specification (RULES.GR) apply. This set may be used in addition to the Quality Requirement Specification Rules (RULES.QR), or whenever serious emphasis on quality definition is required.
•1:STANDARD: The Scale shall, wherever possible, be derived from a standard Scale (in named files or referenced sources), and the standard shall be source referenced (<-) in the specification.
•2:SCALENOTE: If the Scale is not standard, a notification will inform the Scale Owner about this case. "Note sent to <Scale Owner>" will be included as a comment, to confirm this act.
•3:RICH: Where appropriate, a quality concept will be specified with the aid of multiple Scale definitions, each with its own unique tag and appropriate set of defining parameters.
•4:METER: A practical and economic Meter, or set of Meters, will be specified for each Scale. Preference will be given to previously defined Meters in our quantification archives.
•5:METER.NOTE: When 'essentially new' Meter specifications are made (with no reference to a previous case in the generic archives), a notification will inform the Meter Owner about this case. "Note sent to <Meter Owner>" will be included as a comment.
(Continued on the next slide.)

Rules for Quality Quantification (RULES.QQ), 2 of 2
•6:BENCHMARK: A reasonable attempt to establish 'baselines' (Past, Record, Trend) will be made for our system's past, and for the relevant competition.
•7:TERMS: Future-priority requirements (Fail, Goal) will be made with regard to both the long and the short term.
•8:DIFFERENTIATE: A distinction will be made, using qualifiers, between those system components which must have significantly higher quality levels than others, and the components which do not require such levels. "The best can cost too much."
•9:SOURCE: Emphasis will be placed on giving the exact and detailed source (even if a personal guess) of all numeric specifications, and of any other specification which is derived from a process input document (like a Meter which is contractually defined).
•10:UNCERTAINTY: Whenever numbers are uncertain, we will have rich annotation about the degree (plus/minus) and the reason (a comment like "because contract & supplier not determined yet"). The reader shall not be left to guess, or to remember, what is known, or could be known, with reasonable inquiry by the author.

Generic Rules for Technical Specification (including Quality Quantification), GR
0.3. Rules/Forms/Standards: Generic Rules, and a Requirements Rules sample
•Here are some formal rules which could serve as a standard for how to communicate such ideas.
•We call this standard ‘Generic’ because it applies to many types of specification.
•‘Rules’ are a ‘best practice’ procedure for writing a document. Violation of the rules constitutes a formal ‘defect’ in that document.
•Rules are the local law of practice, and violation of them is an 'illegal' act.

GENERIC RULES FOR TECHNICAL AND MANAGEMENT DOCUMENTATION. Tag: RULES.GR
•1:CLEAR: Statements should be clear and unambiguous to their intended reader.
•2:SIMPLE: Statements should be written in their most elementary form.
•3:TAG: Statements shall have a unique identification tag.
•4:SOURCE: Statements shall contain information about their detailed source, AUTHORITY and REASON/Rationale.
•5:GIST: Complex statements should be summarized by a GIST statement.
•6:QUALIFY: When any statement depends on a specific time, place or event being in force, then this shall be specified by means of the [qualifier square brackets].
•7:FUZZY: When any element of a statement is unclear, then it shall be marked, for later clarification, with <angle brackets>.
•8:COMMENT: Any text which is secondary to a specification, and where no defect could result in a costly problem later, shall be written in italic text, and/or headed by a suitable warning (NOTE, RATIONALE, COMMENT), or moved to footnotes. Non-commentary specification shall be in plain text. Italic can be used for emphasis of single terms in non-commentary statements. Readers shall be able to visually distinguish critical from non-critical specification.
•9:UNIQUE: Requirements and design specifications shall be made one single time only. Then they shall be re-used, by cross reference to their identity tag. Duplication is strongly discouraged.

In addition to the generic rules, we can specify some special rules for the specific types of statement we are dealing with; for example SR (below), QQ (above), QR (above).

REQUIREMENTS SPECIFICATION RULES: SPECIFIC (RULES.SR)
•0:GR-BASE: The generic rules (RULES.GR) are assumed to be at the base of these rules.
•1:TESTABLE: The requirement must be specified so that it is possible to define an unambiguous test to prove that it is later implemented.
•2:METER: Any test of a SCALE level, or proposed tests, may be specified after the parameter METER.
•3:SCALE: Any requirement which is capable of numeric specification shall define a numeric scale fully and unambiguously, or reference such a definition.
•4:MEET: The numeric levels needed to meet requirements fully shall be specified in terms of one or more [qualifier-defined] target-level {PLAN, MUST, WISH} goals; mainly the PLAN level here.
•5:FAIL: The minimum numeric levels to avoid system, political, or economic failure shall be specified in terms of one or more [qualifier-defined] ‘MUST’ level goals.
•6:QUALIFY: Rich use of [qualifiers] shall specify [when, where, special conditions].

Free Digital Book on Quality Quantification
•REQUEST “BOOK” in the subject line, to TOM @ GILB .com
•Tom Gilb, Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage
–and I will also send links to related papers on requirements and estimation.