QTIP: Quick Technology Intelligence Processes
Publisher: Elsevier - Science Direct
Journal: Technological Forecasting and Social Change, Volume 72, Issue 9, November 2005, Pages 1070–1081
Empirical technology analyses need not take months; they can be done in minutes. One can thereby take advantage of the wide availability of rich science and technology publication and patent abstract databases to better inform technology management. Doing so requires developing templates of innovation indicators to answer standard questions. One can then automate routines to generate composite information representations ("one-pagers") that address the issues at hand, in the way that the target users want.
How long does it take to provide a particular Future-oriented Technology Analysis (FTA)? We traditionally perceived the answer calibrated in months, particularly for empirical technology analyses. This mindset contributes to many technology management or policy decisions relying primarily upon intuitive sources of knowledge. That need no longer be the case. This paper makes the case for quick text mining profiles of emerging technologies. I describe what we call "tech mining": deriving technology intelligence, especially from R&D information resources. The phenomenon of interest is speed, but speed coupled with provision of information that truly facilitates technology management. The time to conduct certain technology analyses can be reduced from months to minutes by taking advantage of four factors enabling QTIP (Quick Technology Intelligence Processes): 1) instant database access, 2) analytical software, 3) automated routines, and 4) decision process standardization.

The first QTIP factor concerns information availability. A defining characteristic of the "Information Economy" is enhanced access to information. Of particular note to FTA, the great science and technology (S&T) databases cover a significant portion of the world's research output. These databases can be searched from one's computer, enabling retrieval of electronic records in seconds. Many organizations hold unlimited-use licenses for particular databases, allowing thousands of records on a given topic to be located and downloaded at no additional cost. Various databases compile information on journal and conference papers, patents, R&D projects, and so forth. In addition, many researchers share information via the Internet (e.g., physicists increasingly post their papers at arXiv.org). Other databases cover policy, popular press, and business activities. These can be exploited to help understand contextual factors affecting particular technological innovations.
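As a minimal sketch of what such downloaded records make possible (the record structure and values below are hypothetical, not drawn from any particular database), even a simple keyword count over a set of abstract records yields a crude research-activity profile:

```python
from collections import Counter

# Hypothetical publication records, as might be downloaded from an
# S&T abstract database (fields and values are illustrative only).
records = [
    {"year": 2003, "keywords": ["fuel cells", "SOFC", "electrolytes"]},
    {"year": 2004, "keywords": ["SOFC", "anode materials"]},
    {"year": 2004, "keywords": ["fuel cells", "SOFC"]},
    {"year": 2005, "keywords": ["SOFC", "electrolytes"]},
]

def keyword_profile(recs):
    """Count how often each keyword appears across all records."""
    counts = Counter()
    for r in recs:
        counts.update(r["keywords"])
    return counts

profile = keyword_profile(records)
print(profile.most_common(3))  # the most active topics, in descending order
```

Real databases return far richer fields (authors, affiliations, classification codes), but the principle is the same: once records are electronic and local, profiling them is nearly instantaneous.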
All told, this wealth of information enables potent technological intelligence analyses.

The second QTIP factor consists of expedited analyses using one form of "tech mining" software. This paper employs VantagePoint, but the specifics are less important than the principles: many aspects of data cleaning, statistical analysis, trend analysis, and information visualization can be done quite briskly.

The third contributing factor, automated routines, makes a huge difference. As a loose analogy, consider the change from the hand-made automobile to the assembly-line Model T Ford beginning in 1908. Once we identify a set of analytical steps that we want to perform repeatedly, we can script them (write software programs or macros) to automate those steps. The analyst then devotes energies to refining results, presenting them effectively, and interpreting them. For instance, suppose we have a certain S-shaped growth model that we find highly informative for a particular family of technology forecasts. We now "push a button" to generate and plot such a model. We then inspect it, decide that a different growth limit should be investigated, and "push the button" again. In a minute or so, we can examine several alternatives, select the one(s) for presentation, extrapolate to offer a range of future possibilities, and give our interpretation. As with the Model T, standardizing greatly expedites production and enables automation.

The fourth factor profoundly changes the receptivity to empirical analyses. A major impediment to the utilization of FTA results is their unfamiliarity to managers and policy-makers. Today, major organizations are standardizing certain strategic technology and business decision processes. Stage-gate approaches set forth explicit decisions to be sequenced toward particular ends (e.g., new product development). Furthermore, we see organizations going the next step: requiring specific analyses and outputs at each stage.
This facilitates the automated routines (factor three). But, even more importantly, it familiarizes users with data-based technology analyses. The manager who gets the prescribed FTA outputs upon which to base particular technology management decisions comes to know them. (S)he develops an understanding of their strengths and limitations and, thus, of how best to use this derived knowledge to make better decisions. In this way, technology intelligence gains credibility as a vital decision aid. The Model T analogy carries over here too (loosely): the availability of this standard vehicle enabled an efficient infrastructure to develop around it. Likewise, an established technology decision framework constitutes the fourth factor needed for QTIP: decision process standardization. The next section illustrates what it takes to produce composite empirical responses to particular technology management questions, quickly.
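The third factor's "push a button" S-curve step might be scripted along the following lines. This is only a sketch: it assumes a simple logistic model and a crude grid search, the yearly counts are invented for illustration, and actual tech mining software would use proper curve fitting and plotting.

```python
import math

# Hypothetical cumulative publication counts per year for a technology.
years  = [2000, 2001, 2002, 2003, 2004, 2005]
counts = [5, 12, 28, 55, 80, 93]

def logistic(t, limit, midpoint, rate):
    """Simple S-shaped (logistic) growth model."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

def fit_logistic(ts, ys, limit):
    """For a fixed growth limit, grid-search the midpoint and rate that
    minimize squared error; returns (midpoint, rate, sse)."""
    best = None
    for mid in [2000 + 0.25 * i for i in range(40)]:    # midpoints 2000..2009.75
        for rate in [0.1 * j for j in range(1, 31)]:    # rates 0.1..3.0
            sse = sum((logistic(t, limit, mid, rate) - y) ** 2
                      for t, y in zip(ts, ys))
            if best is None or sse < best[2]:
                best = (mid, rate, sse)
    return best

# "Push the button" for a couple of candidate growth limits and compare fits.
for limit in (100, 150):
    mid, rate, sse = fit_logistic(years, counts, limit)
    print(f"limit={limit}: midpoint={mid:.2f}, rate={rate:.1f}, sse={sse:.1f}")
```

Re-running with a different candidate limit takes a moment, which is exactly the inspect-and-rerun loop described above.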
Conclusion
This paper illustrates how to compose informative decision support from empirical information concerning various facets of an emerging technology, quickly. Collectively, the integration of the four QTIP factors results in a qualitative change in FTA. We know of a major corporation that reduced its time to provide a key set of competitive technological intelligence (CTI) analyses from 3 months to 3 days. With another firm, we have been exploring text mining tool applications; we mutually recognized that certain preliminary analyses could be done in 3 minutes, enabling refinement of information searches that would drastically upgrade subsequent FTA work.

These two examples reflect an essential difference. The "3-day" QTIP addresses the technology information needs of end-users, such as senior technology managers or policy-makers, who would not be expected to perform the analyses themselves. In contrast, the "3-minute" example indicates that others engaged in technology analyses have special needs too; the "quick" in this case serves the person performing the search and analysis. Design of QTIP tools and functions must address the diverse needs of all the players. "Process management" factors should be considered for all types of QTIP players:
• information providers (e.g., meeting their needs for profits and protection of their intellectual property),
• information professionals (e.g., in coordinating licenses and access to databases and analytical tools),
• technology analysts (e.g., power users of these capabilities on a regular basis),
• researchers, technologists, and some managers (e.g., occasional users of the databases and analytical tools),
• decision-makers (e.g., policy-makers and managers who weigh emerging technology considerations as either their main focus or as contributing factors, but do not perform the analyses personally).
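For the decision-maker group in particular, it is standardized outputs (the "one-pagers" noted in the abstract) that build familiarity: the same fixed set of indicators, rendered the same way every time, comes to be read at a glance. A minimal sketch of such a fixed-format template follows; the indicator names and figures are invented for illustration.

```python
def render_one_pager(topic, indicators):
    """Render a fixed-format text profile from a dict of indicators,
    so every report presents the same fields in the same layout."""
    lines = [f"Technology profile: {topic}", "-" * 40]
    for name, value in indicators.items():
        lines.append(f"{name:<28}{value}")
    return "\n".join(lines)

# Illustrative indicator values only, not real data.
report = render_one_pager("Solid Oxide Fuel Cells", {
    "Publications (last 5 yrs)": 1240,
    "Annual growth rate":        "18%",
    "Leading organization":      "Univ. A (hypothetical)",
    "Patent families":           310,
})
print(report)
```

In practice the template would emit slides or visualizations rather than text, but the point is the fixed structure, which is what lets routine scripts fill it automatically.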
Process management calls for explicit attention to how the analyses and their outputs can best be organized to enhance utility. Technology analysts need to think beyond what constitutes valid and impressive analyses to what their target users want and what mechanisms can best communicate with them. A key principle is to maximize engagement and ongoing interaction of the QTIP players with each other.

Recognition of the potential for speedy analyses should lead to rethinking the bases for technology management (MOT). Over the past decades, many management domains have come to rely quite heavily upon empirical evidence. For example, manufacturing process management used to depend completely on tacit knowledge. A supervisor spent decades gaining familiarity with his (or occasionally her) machines, people, and processes. He "knew" if something was not working right and initiated repairs accordingly. What could be better than this deep, personal knowledge? Well, it turned out that actual data were better. Compiling and making available performance histories for machines and processes enabled modern Quality Control (QC). When the potential was recognized, process managers realized that dramatic improvements in quality were possible. There would be no "Six Sigma" quality standards without empirical manufacturing process data and analyses thereof.

Technology management, somewhat surprisingly, is among the least data-intensive managerial domains. One would think that scientists, engineers, and technology managers would naturally pursue empirical means to manage R&D and its transition into effective innovations. Not at all: even in tracking our own performance, we researchers strongly prefer peer judgment to bibliometrics. The technical community has a deep distrust of metrics. This poses an additional challenge to be overcome in implementing empirically informed technology management. Of course, many do use empirical information in S&T arenas.
Researchers usually mine the literature to find a few "nuggets" that speak closely to their interests. Patent analysts traditionally sought the few key pieces of intellectual property. Tech mining offers qualitatively different capabilities. It can uncover patterns that reflect competitor strategies. It can also enable researchers and R&D managers to gain a global perspective on entire bodies of research. That can help position research programs and identify complementary efforts by others. On another level, the Dutch government allocates research support to universities based in part upon their publication records. Publications are weighted according to disciplinary journal impact criteria; Journal Citation Reports provide the basis for calculating the merits of individual and unit outputs. This is certainly not a foolproof system, but it provides a more objective set of metrics than the "good old boy" peer review mechanisms.

Certainly, this "tech mining" approach to quick technology analyses does not equally affect all forms of FTA. This paper explores the potential to expedite certain technological intelligence functions. "Tech mining" exploits the information compiled by S&T and other (e.g., business) databases; as such, it represents one advanced form of technology monitoring. This information can serve other FTA needs to various degrees:
• Technology Foresight: Quick tech mining can help participants grasp the scope of technology development efforts. Access to results in interactive mode (e.g., using the VantagePoint Reader software) enables digging down to locate specifics on a point of interest, such as identifying an active researcher on a particular topic.
• Technology Forecasting: QTIP can provide empirical measures for certain trend analyses to support growth model fitting and trend extrapolation. It can also help locate experts to engage in judgmental forecasting.
• Technology and Product Roadmapping: QTIP serves background information roles well. It is vital in documenting external technology development activities to track their likely trajectories. It helps devise internal R&D priorities to hit the gaps in external development efforts.
• Technology Assessment: Again, QTIP can help scope the extent of R&D activities. Exploiting contextual information resources that cover policy, standards, public concerns, possible health and environmental hazards, and perceived technological impacts can further support TA activities.

In sum, tech mining offers partial, but potentially very effective, support for these varied FTA endeavors. QTIP emphasizes speed in generating technology analyses. Speed surely must be tempered by need. The sidebar vignette offers a realistic scenario of how this could unfold. The driver is "when do you need to have what information?" Note that this seriously alters relationships and expectations between manager-users and technology analysts. Particularly as academic researchers, we have an inclination to say "we can deliver a fine analysis; it will take two semesters to complete." Instead, the quick mindset has the user set the defining temporal parameter, the deadline, and we technology analysts fit into that schedule. Most importantly, this changed mindset opens up tremendous potential for better informed MOT.

Sidebar: hypothetical QTIP vignette
• 8:00 am: The Vice-President for Research at Georgia Tech asks me to benchmark this university's SOFC research against the leading American universities for a presentation this noon. I get his suggestion on who, on campus, is active in fuel cells. We decide to focus on the last 5 years. He wants 3 PowerPoint slides like those we used last month in a similar benchmarking exercise.
• 8:05 am: I finish a quick Dialog "DialIndex" search that identifies which databases contain the most SOFC information. I select two that provide good coverage and are licensed for unlimited use by Georgia Tech.
• 8:10 am: I complete simple searches in SCI and EI Compendex, downloading 500-record samples of recent publication abstracts with SOFC in titles or keywords.
• 8:15 am: I import each search into VantagePoint and scan the keywords to ascertain whether the search should be expanded to include other terms, or restricted to eliminate noise. Inspection of EI Compendex class codes helps determine whether classification-based searching should also be used. Perusal of the organizational affiliations of the authors suggests possible benchmark universities.
• 8:40 am: I search a compilation of Georgia Tech publication records to augment the VP's awareness of who is active in fuel cells. I check that my search strategy captures most of the Georgia Tech authored papers, to help validate the query.
• 8:55 am: I phone around to find one local subject matter expert willing to review my search strategy to spot gaps or other weaknesses. Bill is available for a "3-minute" review before class. I e-mail my digest and we discuss it on the phone.
• 9:00 am: I undertake the "final" searches in SCI and EI Compendex and download hundreds of SOFC records for the most recent 5 years.
• 9:30 am: The records are imported into VantagePoint. A script runs data fusion and duplicate removal. An additional script profiles the leading researchers at each of the "Top 3 + Georgia Tech" American universities in the SOFC domain. A comparative 5-year trend script is run. Results are pasted from MS Excel into MS PowerPoint "GT Benchmarking" slide templates.
• 10:00 am: An auxiliary search is run on a U.S. Department of Energy R&D projects database for these four universities. A script generates a table showing the overall DOE project activity that each university evidences on fuel cells, plus pie charts showing how much of each university's energy research focuses on SOFCs.
• 10:20 am: Bill reviews the 3 PowerPoint slides and notes that Georgia Tech has collaborated recently with a key researcher at one of the other universities. He also notes that we have left out a key Georgia Tech SOFC researcher who leads many sponsored research projects for which open literature publication is not appropriate.
• 10:20 am: PowerPoints with interpretive comments, and a short background technical report, are provided to the VP.

This paper focuses on the idea that informative mining of S&T information resources can be done quickly and powerfully. Once that is accepted, extensive opportunities arise. The information resources are largely, but not completely, texts. "Text mining" tools are progressing rapidly, drawing on both statistical and artificial intelligence approaches. Advanced entity extraction, query refinement, and elucidation of relationships based on text co-occurrence patterns can extend QTIP possibilities. Development of information visualizations especially for S&T offers great potential.

To close, this "new" method brings to bear available S&T information resources and analytical tools to generate FTA more quickly. Its novelty lies in the approach to technology analyses in support of technology management. Fully realizing QTIP potential requires significant process management change:
• Systematize strategic business decision processes.
• Mandate that explicit technology information products be provided for decision stages in such processes.
• Provide each researcher, development engineer, project manager, intellectual property analyst, etc. with direct, desktop access to a couple of the most useful S&T information databases.
• Negotiate unlimited-use licenses for those databases.
• License easy-to-use analytical software for all.
• Script the routine analytical processes.
• Develop standard output templates (information visualizations).
• Train the potential QTIP participants in use of the tools and resulting FTA outputs.
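As one small sketch of what "script the routine analytical processes" can mean, a duplicate-removal step like the vignette's 9:30 am data fusion might look as follows. The records and the title-normalization rule are illustrative assumptions, not VantagePoint's actual method.

```python
import re

# Hypothetical records downloaded from two databases (e.g., the SCI and
# EI Compendex searches in the vignette); duplicate titles differ only
# in case and punctuation.
sci_records = [
    {"title": "Anode Materials for SOFCs", "source": "SCI"},
    {"title": "Electrolyte stability in solid oxide fuel cells", "source": "SCI"},
]
compendex_records = [
    {"title": "Anode materials for SOFCs.", "source": "Compendex"},
    {"title": "Sealing glasses for SOFC stacks", "source": "Compendex"},
]

def normalize(title):
    """Lowercase and strip non-alphanumerics so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def fuse(*record_sets):
    """Merge record sets, keeping the first copy of each duplicate title."""
    seen, merged = set(), []
    for records in record_sets:
        for r in records:
            key = normalize(r["title"])
            if key not in seen:
                seen.add(key)
                merged.append(r)
    return merged

merged = fuse(sci_records, compendex_records)
print(len(merged))  # 3: the duplicate "Anode materials" record was removed
```

Real fusion logic must handle author-name variants, abbreviations, and partial matches, but even this simple version shows why a scripted step takes seconds rather than an afternoon of hand-checking.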
But it is worth the effort. I am convinced that quick “tech mining” can dramatically improve MOT effectiveness. I would go so far as to forecast that the technology manager who relies solely on intuitive information faces extinction. The manager who incorporates data-based intelligence into decision processes will be better informed and that will lead to competitive advantage. We look to this revolutionizing technology management much as the Model T revolutionized production processes.