When planning new IC design projects, such as SoCs or complex analog or RF chips, R&D organizations that have a firm grasp on the complexity of implementing the design wield a powerful competitive advantage. Complexity is a measure of engineering difficulty, and it provides the foundation for reliably estimating engineering resource requirements and development cycle time, which is the essence of good project planning. Can anyone disagree that consistently reliable project plans, meaning projects that finish on time and within budget, translate to higher revenue and profits? But how does one get an accurate, quantitative calculation of design complexity?
By Jeffrey Eversmann
After two years of doom and gloom, it’s refreshing to attend an industry event and hear talk of innovation—at all levels. That was the atmosphere at a recent GSA Silicon Series luncheon I attended in Austin, Texas, that featured a panel discussion on blurring technology lines.
At the application-segment level, Patrick Moorhead, marketing vice president with AMD, joked:
“I’ve been hearing that the desktop market is dying for the past 15 years.”
He made that quip after holding up the “4th screen” examples he had brought with him: an iPad and a Sony eBook reader. “Only 5-10% of consumers back up their data, so a fixed device will always be in the home,” Moorhead said.
I agree. While I like the professional security that a proliferation of leading-edge microprocessors brings, I am burdened by the yearly upgrade rotation I am now on to keep the six-plus PCs in my home current. All of us in the semiconductor industry have been through multiple iterations of the tablet device, some of them from Apple. As was often said by the panel, “it’s not an either-or these days.”
Fellow panelist Naveed Sherwani, CEO of Open-Silicon, Inc., added “the new form factor will succeed if it is useful.” So, panelists agreed that the iPad is not a desktop (or even laptop) killer. The question is: Will the average consumer add yet another device to the list of electronic gadgets we carry around each day?
The panel shifted to the technology level and wrestled with an intriguing question: Will ARM replace x86 in the desktop or will x86 replace ARM in the SoC market? While some in the audience checked email on their smartphones, Sandeep Shah, director of marketing and applications at Marvell Semiconductor, Inc., and Sherwani tackled the question.
Shah argued that an “ARM architecture licensee can bring together the best of both worlds.” (This is a very interesting perspective in light of Apple’s recent purchase of Intrinsity, which worked with Samsung to develop the ARM Cortex-based A4 processor.)
Shifting processor sands
Sherwani was quick to add that while there really hasn’t been an attempt by x86 to take over SoC design, that doesn’t mean an attempt isn’t brewing:
“In the next three years or so, things will get more competitive and more intense, when x86 is available for SoC development.”
Then it was time to move on to another much-discussed technology challenge, low power design. The panel members pulled out their different battery-powered devices and rattled off the actual vs. published battery life. “What we really need is more disclosure, a ‘truth-in-battery-life’ from silicon providers,” Moorhead said.
Shah, who probably lives power issues on a daily basis, talked about how the different Blackberry models used different chips from Marvell to get different power performance in the system. Marvell focuses on both system-level and gate-level approaches to power management. Sherwani wrapped things up from a design perspective saying “we have just scratched the surface on lower power design.” Maybe what we need is a Moore’s Law for low power design – something that will challenge engineers to do things that today are viewed as impossible.
All in all, the GSA luncheon was a great opportunity to re-connect with fellow semiconductor engineers. We exchanged cards with the same cell phone numbers, but with new company names, new titles, and new addresses. We talked about how tough things have been but how happy we are to be traveling less and spending more time with our families.
It felt like the calm before the innovation storm. I don’t know about you, but I’m here and getting ready for it.
We’re gearing up for DVCon (Feb. 22-25) in San Jose, and not just because we’re participating in a panel. DVCon (on Twitter, @dvCon), which has emerged as an increasingly important event in recent years, features Cadence CEO Lip-bu Tan as its keynote speaker. His topic gives a new voice to the mounting productivity crisis in semiconductor and system design.
According to an abstract of his talk:
“…the industry must approach the product development process much differently. The classic ‘brute force’ methods cannot scale to support the complexity of today’s SoCs and Systems. These traditional methods result in mounting costs and unpredictable schedules that are detrimental to profitability.”
- Cadence approaches the problem by giving engineers (among many other things) design exploration options that speed the implementation of the physical architecture of a chip.
- Numetrics approaches the problem by helping teams quantify the complexity of their design effort and build reliable project and staffing plans. This is crucial in an era where most IC projects slip schedule significantly.
Our vice president of professional services, Steve Gary, will speak on a panel just after Tan’s, titled “What Keeps You Up at Night?” It’s moderated by JL Gray from Verilab, who writes the excellent Cool Verification blog; he’s posted a panel preview this week. Also in the conversation will be John Goodenough from ARM Ltd., Sheela Pillai of Advanced Micro Devices, Inc., Jim Crocker from Paradigm Works, Inc., and Victor Melamed from Ambarella.
There are plenty of things keeping the industry up at night, but I think we’ll hear a lot of excellent ways to overcome the sleeplessness and drive productivity—and the industry—to the next level. Hope to see you there.
Sometimes the simple questions are the most vexing. That hit me this week while participating in a DesignCon panel in Santa Clara, moderated by EDN Executive Editor Ron Wilson.
The title seemed easy enough: “Getting to Design Quality Closure Without Compromising Productivity.”
But really, what IS quality? How do we define it?
My fellow panelist, Camille Kokozaki, president of Design Rivers, quipped “It’s like pornography: you know it when you see it.”
Piyush Sancheti, senior director of business development at Atrenta, came close:
“Quality is meeting the design objectives you have: whether it’s area, power, timing, functionality, or, in a broader sense, customer expectations. Productivity is getting there.”
Sancheti then added:
“Being able to measure it (productivity) with tools like Numetrics is important because you want to hit your objectives as fast and effectively as possible.”
Not surprisingly, our panel wrestled with one of the big issues in design quality today: verification. It deeply affects design quality and productivity. Sancheti noted that for some teams, 70 percent of the entire design development is spent on verification.
What I see firsthand from customers is that they struggle to understand how verification affects their productivity. Some program managers I talk to say:
“I understand the scope of logic design and physical implementation. Verification is an unknown for me. If I give the verification team another two months, they’ll take it, but how do I know that we’re better off?”
So I think we’re seeing that verification needs some sort of completion model, so people can move on. And that’s not easy. Our data shows that some companies accept additional tape-outs as part of a larger verification strategy, but that can hurt overall productivity.
How we fix verification is a broader issue. Do we lean on formal methods at the architectural level as opposed to time- and engineering-consuming test vectors?
For now, our role is to help teams quantify their design effort, properly staff their projects, and understand where they stand with respect to the industry’s best teams. From there they can make fact-based decisions to drive productivity improvements.
That’s our contribution to the broader challenges of verification and design quality, but as we all know, it takes a village (and many future industry panels) to come up with the solution.
(Jeff is Numetrics’ director of professional services and product marketing).
(Summary: As the semiconductor industry emerges from the recession, new ways of thinking are emerging as well to improve what’s becoming a new differentiator for companies: IC design development.)
- Worldwide third-quarter PC microprocessor unit shipments rose 23% compared to the second quarter, reaching a new all-time high, according to market research firm International Data Corp. (IDC).
- Chip-sales growth should be 10 percent in 2010 and 8.4 percent in 2011, according to the Semiconductor Industry Association. The decline in 2009 chip sales (down 11.6 percent) is now less than earlier forecast.
- Individually, companies like Marvell, TSMC and ON Semiconductor are reporting encouraging results.
But, as they say, there’s good news and bad news. The good news is obvious. The bad news is more subtle: Companies are beginning to crank up the product-development dial significantly, and this can become a challenge for R&D organizations.
As a surge of new projects occurs, hiring generally is slow to catch up to demand. This puts stress on engineering organizations. Schedules are difficult to predict, and the engineers can get shifted from one product development team to another in the race to make deadlines. Managing a portfolio of products turns into a torch-juggling exercise—spectacular to watch but done with the knowledge that the risk is high.
This is a significant problem in the fabless era, a time in which IC design development is an increasingly important source of differentiation for semiconductor companies. A sudden burst of product-development activity can bring R&D organizations to their knees.
Design development productivity is something to consider as we emerge from this recession. The stakes are high, and there’s little room for error in marshalling engineering resources to get products to market quickly.
All recessions force change on business, and this one is no exception. Old ways of doing things are being replaced by new thinking on productivity—all with an eye toward making “up and to the right” last.
(Summary: We inevitably get questions about Numetrics’ technology after webinars or live event presentations, and we’d like to share some of them in the spirit of helping you understand more about our products and solutions. Here are answers to several recent questions in the virtual mail bag).
Q: How do you define productivity?
A: We calculate the complexity of the project and divide the complexity units by the total number of person-weeks required to get that product out to volume production. That quotient is the productivity number. The typical range runs from about 500 complexity units per person-week on the low end for a large team to 3,000 for a small team.
There’s another measure, throughput, which is complexity units per calendar week; that’s a measure of normalized cycle time. Productivity measures the efficiency of the team, and a higher number is better.
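To make the two metrics concrete, here is a minimal sketch in Python. The function names and the magnitudes in the example are my own illustration, not Numetrics’ actual API or scale:

```python
# A minimal sketch of the two metrics described above. Function names and
# the complexity-unit magnitudes are illustrative, not Numetrics' actual API.

def productivity(complexity_units: float, person_weeks: float) -> float:
    """Complexity units delivered per person-week: team efficiency."""
    return complexity_units / person_weeks

def throughput(complexity_units: float, calendar_weeks: float) -> float:
    """Complexity units per calendar week: normalized cycle time."""
    return complexity_units / calendar_weeks

# Hypothetical project: 1,200,000 complexity units, 30 engineers, 40 weeks.
units, engineers, weeks = 1_200_000.0, 30, 40
print(productivity(units, engineers * weeks))  # 1000.0 units per person-week
print(throughput(units, weeks))                # 30000.0 units per week
```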
Q: I’ve heard that in some sectors productivity decreases as team size increases. Is this true in semiconductor product development?
A: It’s a universal effect across pretty much any activity that involves building things. When you build larger teams, each person does a smaller and smaller slice of the overall work, and more work has to be split apart and then put back together. Bigger teams mean more meetings and more management. It’s universal and it’s inevitable. With the Numetrics approach you can minimize this effect, so the decreasing-productivity curve is flatter than it would otherwise be.
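For illustration only, here is one classic way to sketch that effect in code: charge each engineer a coordination tax that grows with the number of pairwise communication paths. The constants are hypothetical; Numetrics’ calibrated curves come from project data, not from a toy formula like this one:

```python
# Illustrative only: model the team-size effect as a coordination tax that
# grows with the number of pairwise communication paths, n*(n-1)/2.

def effective_person_weeks(team_size: int, weeks: float,
                           overhead_per_pair: float = 0.0002) -> float:
    """Raw person-weeks discounted by a coordination tax that grows ~n^2."""
    raw = team_size * weeks
    pairs = team_size * (team_size - 1) / 2
    tax = min(0.9, overhead_per_pair * pairs)  # cap so output stays positive
    return raw * (1.0 - tax)

for n in (5, 20, 80):
    per_person = effective_person_weeks(n, weeks=52) / n
    print(f"team of {n:2d}: {per_person:.1f} effective weeks per engineer")
# Output falls from ~51.9 to ~19.1 effective weeks per engineer as the team grows.
```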
Q: It’s impossible to predict in a design project how many times customer requirements will change, when your EDA tools go buggy or if a key contributor leaves the team. So how do you quantify schedule risk with so many unpredictable variables?
A: The simple answer is that our tools don’t predict things. You have to draw a line between statistical analysis and a crystal ball.
What Numetrics’ tools do is take your inputs of design parameters and measure them against the history of more than 1,500 design projects spanning eight generations of technology evolution (here’s a link to a demo of our tools). Because the model is built from the data of those hundreds and hundreds of designs, it bakes in a realistic amount of effort for dealing with those issues. It’s a form of contingency planning.
Think of it like yield modeling. You know that on each wafer a certain number of dice will fall out. Yield modeling doesn’t tell you which particle is going to hit which die and where, but it gives you an accurate assessment of how your design will yield. Numetrics is like a yield model for project plans: given what you’ve input, it tells you the probability that you will fail to hit your targets.
It allows you to make quantitative assessments. It’s a probability model, not a crystal ball.
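To make the analogy concrete, here is a toy Monte Carlo rendering of the idea, under my illustrative assumption that actual duration scatters lognormally around the data-driven estimate:

```python
import random

# A toy Monte Carlo version of the "yield model for project plans" idea.
# Assumption (for illustration): actual duration scatters lognormally
# around the complexity-based estimate, with spread fit from history.

random.seed(0)

def p_schedule_success(planned_weeks: float, estimated_weeks: float,
                       sigma: float = 0.25, trials: int = 100_000) -> float:
    """Probability the planned schedule is met, given historical scatter."""
    hits = sum(
        random.lognormvariate(0.0, sigma) * estimated_weeks <= planned_weeks
        for _ in range(trials)
    )
    return hits / trials

# A plan 10% tighter than the data-driven estimate is met only about a
# third of the time under these assumptions.
print(p_schedule_success(planned_weeks=54, estimated_weeks=60))
```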
Q: How does the complexity calculation model handle predictions for newer nodes, such as 45 and 32nm?
A: Numetrics’ IC Industry Database has collected information across eight technology generations, so the shift from one generation to the next has been observed before. What we’ve observed is that early users of a technology node face considerably more complexity than later users of the same node, once the models and libraries have stabilized. The model has been calibrated to this effect, which repeats from generation to generation, so we can project what the extra complexity of a new node will mean for a new design.
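As a sketch only, that effect can be pictured as a complexity multiplier that decays as a node matures. The shape and constants below are hypothetical; the real calibration comes from the project database:

```python
# Illustrative only: a complexity multiplier that starts high at a node's
# introduction and decays as models and libraries stabilize.

def node_complexity_multiplier(quarters_since_intro: float,
                               early_penalty: float = 0.5,
                               half_life_quarters: float = 4.0) -> float:
    """Returns 1.0 for a mature node, up to 1 + early_penalty at introduction."""
    decay = 0.5 ** (quarters_since_intro / half_life_quarters)
    return 1.0 + early_penalty * decay

print(node_complexity_multiplier(0))  # 1.5   -> first wave of users of a node
print(node_complexity_multiplier(8))  # 1.125 -> two years after introduction
```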
Q: Can your tools get data from existing sources or do I have to input it manually?
A: We deal with milestones, staffing information and complexity information. Typically this information is copy-pasted from existing sources, or customers use XML import to get data into our tools.
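As a rough illustration of the XML route, here is a minimal Python sketch that pulls milestone and staffing data out of an export. The schema shown is invented for the example; it is not our actual import format:

```python
import xml.etree.ElementTree as ET

# Hypothetical export: the <project>/<milestone>/<staffing> schema below is
# invented for illustration, not the tools' real import format.
SAMPLE = """\
<project name="soc_alpha">
  <milestone name="rtl_freeze" week="14"/>
  <milestone name="tapeout"    week="52"/>
  <staffing role="verification" headcount="18"/>
</project>
"""

root = ET.fromstring(SAMPLE)
milestones = {m.get("name"): int(m.get("week")) for m in root.iter("milestone")}
print(root.get("name"), milestones)  # soc_alpha {'rtl_freeze': 14, 'tapeout': 52}
```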
(Alex is Numetrics’ director of professional services).
Underestimating the complexity of an SOC semiconductor design project is a growing problem in our industry. In an era where SOC projects cost tens of millions of dollars to complete, a week of schedule slip means $1 million or more in lost revenue potential. That’s unacceptable.
That was my main point last week during a panel I participated on that was part of the EE Times SOC Virtual Conference.
Former EE Times EDA Editor Richard Goering, now blogging for Cadence, captured the panel well in a post this week (Are SoC Development Costs Significantly Underestimated?).
To justify the investment in an SoC, Collett said, the available revenue stream must be 10X the development costs. Thus, if an SoC has a $500 million market opportunity, development costs should not exceed $50 million. Today, however, development costs can easily reach $40 to $80 million. Collett noted that 60 percent of this cost is labor and that the major part of the overall development cost is verification.
Richard, with a great comparison, went on to write:
Anyone who has ever been involved in a home remodeling project knows how hard it is to get a reliable estimate up front of how long it will take and how much it will cost. Underestimating time and cost is commonplace. A large SoC design project is far more complex, with many more stakeholders. There is no simple answer to the question of how development costs can be accurately predicted. But there are some ideas about how to lower development costs.
Tensilica CTO Grant Martin weighed in from the IP perspective, Xilinx VP of Product Development Steve Douglass offered the FPGA perspective, and ASIC designer Sven Andersson from Realtime Embedded AB talked about the value of verified IP blocks. It was a great conversation, and you can hear it in archived form by registering for the event.
There’s some additional information about the panel (we tweeted some highlights during it) that has been cataloged under the hashtag #eetsoc. And we’ve published a helpful white paper on how to measure IC development productivity in our online library.
Time really is money in the semiconductor industry, and quantifying schedule risk is an excellent way to maximize your engineering investments.
I had the pleasure of participating in a great online panel yesterday that was part of the EE Times SOC Virtual Conference, attended live by more than 1,500 people. CTO Grant Martin with Tensilica, product-development Vice President Steve Douglass with Xilinx, and ASIC and FPGA designer Sven Andersson of Realtime Embedded AB all contributed to a robust discussion of where next-generation design is headed.
I encourage you to listen to the panel, which is now archived for the next six months.
My point was pretty straightforward:
- If you misunderstand your semiconductor design project’s true cost, your SOC may be doomed.
Think about it: An SOC design today needs to return 10x its investment, and there aren’t many end markets huge enough to justify an SOC project whose costs and schedule aren’t carefully managed. If the design costs $50 million to $80 million to develop and there’s only a $200 million market, then the design can’t be justified.
So getting your arms around true development cost is what SOC development is all about.
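That back-of-the-envelope rule is easy to state in code. A minimal sketch, with the 10x multiple taken from the panel discussion:

```python
# The panel's 10x rule of thumb as arithmetic: an SOC is justified only if
# the available market is at least ten times the development cost.

def min_market_required(dev_cost_musd: float, multiple: float = 10.0) -> float:
    """Smallest market (in $M) that justifies a given development cost."""
    return dev_cost_musd * multiple

for cost in (50, 80):
    print(f"${cost}M project needs at least a ${min_market_required(cost):.0f}M market")
# By the same rule, a $200M market supports at most a $20M development budget.
```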
Big changes are occurring before our eyes in the semiconductor world. And while you might say that the industry always has been in a state of flux, understanding the nature of today’s changes is key; reacting properly to that understanding is imperative.
What’s new? In short, it’s a shift in focus: The long transition toward the fabless model is almost complete. With the number of semiconductor companies doing their own manufacturing dwindling to a handful, the time has come for executives and engineering managers to figure out where their companies’ differentiation now lies.
Manufacturing used to be one of those differentiators. But today, with everyone buying manufacturing services from TSMC, UMC, Chartered or other foundries, there’s very little differentiation in how ICs are manufactured. But there can be enormous differentiation and value in how they’re designed.
How is this possible, in a world of well-established design-automation tools and methodologies? One approach is to bring more predictability and productivity to design projects and teams; to help engineering managers get insightful, relevant data early in the design decision-making process; and to enable a portfolio of designs to be centrally managed efficiently. That’s our business, and it’s a topic I’ll explore in detail this Wednesday (Sept. 16) during EE Times’ SoC Virtual Conference.
I’ll be presenting on a panel (Economics of Next-Generation SOC Design: A Node Too Far? 2-3 p.m. PDT) with Grant Martin, chief scientist, Tensilica; Steve Douglass, vice president, product development, Xilinx; and Sven Andersson, ASIC and FPGA designer, Realtime Embedded AB. The panel will be moderated by EE Times’ Online Editor Dylan McGrath.
If you want a peek at some of what will inform my presentation, take a look at our Numetrics solutions page for starters. And then think about the implications of these two statistics:
- 60 percent of IC projects slip at least one quarter.
- 16 percent of IC projects slip more than one year.
I hope to see you live Wednesday during the virtual panel!