Making Sense of Data II

The Making Sense of Data series fills a current gap in the market for easy-to-use books for non-specialists that combine advanced data mining methods, the application of these methods to a range of fields, and hands-on tutorials. Making Sense of Data II: A Practical Guide to Data Visualization, Advanced Data Mining Methods, and Applications offers a comprehensive collection of advanced data mining methods coupled with tutorials for applications in a range of fields including business and finance. This book is appropriate for students and professionals in the many different disciplines involving making decisions from data.

Xcelsius Present 2008 Win NUL 50+ Units

Xcelsius Present (full product) 2008 for Windows (Named User License for 50+ units). This product is available through the Crystal Volume Licensing Program (CVLP). Pricing is per license; minimum of 50 licenses. Xcelsius Present is point-and-click data visualization software designed to create Flash-based interactive data presentations from ordinary spreadsheets and share them via Microsoft Office and Adobe PDF. It enables business users to create professional-looking presentations and shed light on possible business decisions with the power of what-if scenarios.

  • Create professional-looking presentations in just a few minutes
  • Transform rows of data into interactive charts, maps, and more
  • Make informed decisions by exploring what-if scenarios
  • Engage, inform, and persuade your audience with stunning visualizations
  • Design advanced data visualizations using more than 50 pre-built analytics
  • Get started instantly with pre-built templates included with the application

Adobe Photoshop CS3 Extended AE

Adobe Photoshop CS3 Extended Academic. Manufacturer: Adobe. Packaging: OEM DVD Case. Platform: Windows. Availability: In Stock. Product ID #: 30204.

Ideal for film, video, and multimedia professionals, graphic and web designers using 3D and motion, and professionals in engineering and science, Adobe Photoshop CS3 Extended software delivers everything in Photoshop CS3 and more. Render and incorporate 3D images into your 2D composites. Stop time with easy editing of motion graphics on video layers. And probe your images with measurement, analysis, and visualization tools.

  • Nondestructive editing: Edit nondestructively with new Smart Filters, which let you visualize different image effects, and Smart Objects, which let you scale, rotate, and warp raster and vector graphics, all without altering the original pixel data.
  • Rich painting and drawing toolset: Create or modify images with a wide assortment of professional, fully customizable paint settings, artistic brushes, and drawing tools.
  • Advanced compositing: Create more accurate composites by automatically aligning multiple Adobe Photoshop layers or images based on similar content. The Auto-Align Layers command quickly analyzes details and moves, rotates, or warps layers to align them perfectly, and the Auto-Blend Layers command blends the color and shading to create a smooth, editable result.
  • 3D compositing and texture editing: Easily render and incorporate rich 3D content into your 2D composites; even edit existing textures on 3D models directly within Photoshop Extended and immediately see the results. Photoshop Extended supports common 3D interchange formats, including 3DS, OBJ, U3D, KMZ, and COLLADA, so you can import, view, and interact with most 3D models.
  • Movie Paint: Enhance video directly within Photoshop Extended. Now you can paint, add text, and clone over multiple frames of an imported video sequence.



Manufacturer: Intel. Product: Intel Seattle II Desktop Motherboard. PN: SE440BX2NAV. Manufacturer PN: SE440BX2NAV. MCS Number: 196599.

Detail: The Intel SE440BX-2 motherboard delivers 100-MHz system-level bandwidth to optimize the performance of Pentium III processors. This product has recently been updated to provide support for the new Pentium III processors that incorporate Advanced Transfer Cache. With support for the Accelerated Graphics Port and the increased throughput of the 100-MHz system bus, the SE440BX-2 motherboard delivers enhanced system performance for the demanding system applications of today and tomorrow.

Proven System Performance with the 100-MHz System Bus: The SE440BX-2 motherboard features the Intel 440BX AGPset, which supports a 100-MHz system bus, improving the bandwidth between the Pentium III processor, Accelerated Graphics Port, 100-MHz SDRAM, and PCI bus. The result is significantly enhanced media and graphics performance. Home PC users will enjoy improved texture rendering in 3D software, including games, entertainment, educational, and digital-imaging applications. Business PC users will appreciate smooth performance in 3D applications such as CAD programs, as well as in sophisticated data visualization and web-authoring tools.

The SE440BX-2 motherboard also provides OEMs and system integrators with excellent design flexibility by supporting both 66-MHz and 100-MHz system designs. This flexible platform accommodates different memory and processor combinations, allowing 100-MHz components to be added as needed, even at a later date. Moreover, the SE440BX-2 motherboard is designed to meet the demanding needs of multiple end users. The board features an on-board hardware management ASIC, a Wake on LAN* header, and Intel LANDesk Client Manager software to lower the total cost of ownership. The SE440BX-2 motherboard also offers optional integrated Yamaha* PCI audio for an exceptional AC'97 audio subsystem.

Need comparison between 2.4GHz Core 2 and 1.6GHz Core 2 Quad laptop?

I am a Data Warehousing Architect; I usually work with Teradata, SQL Server, and Oracle databases.
I use Dundas Chart for visualization, and my desktop PC annoyed me due to its low performance.
I need to buy a high-performance laptop, but I have also learned that applications not developed for quad-core processors will not benefit from a quad-core processor. Therefore I am confused whether I should buy a 2.4GHz Core 2 or a 1.6GHz Core 2 Quad laptop. The laptop I plan to buy can be found at the following link ( )

Kindly answer with respect to the applications I am using,
or suggest any other laptop with approximately 1GB of video memory, long battery life, and a price below $1000.

I would go with the quad anyway. Yes, it provides no benefit if the program cannot utilize four threads, but it will still provide headroom when multitasking, and future benefits if you purchase software that can utilize all four cores.

Paperback, Yahoo! Web Analytics: Tracking, Reporting, and Analyzing for Data-driven Insights

Yahoo! Web Analytics teaches readers how to collect data, report on that data, and derive useful insights using Yahoo!'s free Web analytics tool. This detailed resource from Yahoo!'s Director of Data Insights discusses the why of Web analytics as well as the how, while revealing secrets and tricks not documented elsewhere. The thorough book also offers step-by-step instructions and advanced techniques on everything from using data collection groupings to creating compelling data visualizations. It's a must-read for all analytics professionals and those who want to be.

IBM ILOG Elixir V2.5-Upgrade-Commercial-All Available Platforms

by IBM. IBM(r) ILOG(r) Elixir V2.5 provides 11 graphical data-display components for custom Adobe(r) Flex(r) 3 and Adobe AIR(r) rich Internet application (RIA) development. It includes 3D charts, vector maps, gauges, calendar displays, heat maps, OLAP and pivot charts, organization charts, treemaps, radar charts, and Gantt resource and task charts for building more intuitive, interactive data visualization and decision support displays. Already own ILOG Elixir V1? IBM ILOG Elixir V2.5 adds powerful new components, such as calendar displays, Gantt task charts, heat maps, and OLAP and pivot charts, that will help your users understand data more quickly while potentially saving you weeks or months of development time. It also features numerous enhancements to V1 components.

Data Mining: An Overview


1. Introduction

Every organization accumulates huge volumes of data from a variety of sources on a daily basis. Data mining is an iterative process of creating predictive and descriptive models, by uncovering previously unknown trends and patterns in vast amounts of data from across the enterprise, in order to support decision making. Text mining applies the same analysis techniques to text-based documents. The knowledge gleaned from data and text mining can be used to fuel strategic decision making. During the last decade, a number of knowledge discovery systems were created that detect structure hidden in data in the form of functional dependencies between attributes and formulate them as mathematical equations or other symbolic rules. One of the most developed such systems, which can discover very complex and diverse equations, systematically solve problems of data error analysis, and evaluate the statistical significance of obtained results, is designed to discover empirical laws in data in the form of functional programs constructed from standard and user-defined functional primitives. Although the systems that discover numerical dependencies in data use diverse knowledge representation formalisms and search methods, they face the same set of difficulties inherent to their approach. Traditional document and text management tools are inadequate to meet these needs. Document management systems work well with homogeneous collections of documents, but not with the heterogeneous mix that knowledge workers face every day.

Even the best Internet search tools suffer from poor precision and recall.

2. An Architecture for Data Mining

To best apply these advanced techniques, they must be fully integrated with a data warehouse as well as flexible interactive business analysis tools. Many data mining tools currently operate outside of the warehouse, requiring extra steps for extracting, importing, and analyzing the data. Furthermore, when new insights require operational implementation, integration with the warehouse simplifies the application of results from data mining. The resulting analytic data warehouse can be applied to improve business processes throughout the organization, in areas such as promotional campaign management, fraud detection, new product rollout, and so on. Figure 1 illustrates an architecture for advanced analysis in a large data warehouse.

Figure 1 – Integrated Data Mining Architecture

The ideal starting point is a data warehouse containing a combination of internal data tracking all customer contact coupled with external market data about competitor activity. Background information on potential customers also provides an excellent basis for prospecting. This warehouse can be implemented in a variety of relational database systems: Sybase, Oracle, Redbrick, and so on, and should be optimized for flexible and fast data access.

An OLAP (On-Line Analytical Processing) server enables a more sophisticated end-user business model to be applied when navigating the data warehouse. The multidimensional structures allow the user to analyze the data as they want to view their business – summarizing by product line, region, and other key perspectives of their business. The Data Mining Server must be integrated with the data warehouse and the OLAP server to embed ROI-focused business analysis directly into this infrastructure. An advanced, process-centric metadata template defines the data mining objectives for specific business issues like campaign management, prospecting, and promotion optimization. Integration with the data warehouse enables operational decisions to be directly implemented and tracked. As the warehouse grows with new decisions and results, the organization can continually mine the best practices and apply them to future decisions.

2.1. The Scope of Data Mining

Data mining derives its name from the similarities between searching for valuable business information in a large database — for example, finding linked products in gigabytes of store scanner data — and mining a mountain for a vein of valuable ore. Both processes require either sifting through an immense amount of material, or intelligently probing it to find exactly where the value resides. Given databases of sufficient size and quality, data mining technology can generate new business opportunities by providing these capabilities:

2.2. Capabilities:

  • Automated prediction of trends and behaviors. Data mining automates the process of finding predictive information in large databases. Questions that traditionally required extensive hands-on analysis can now be answered directly from the data — quickly. A typical example of a predictive problem is targeted marketing. Data mining uses data on past promotional mailings to identify the targets most likely to maximize return on investment in future mailings. Other predictive problems include forecasting bankruptcy and other forms of default, and identifying segments of a population likely to respond similarly to given events.

  • Automated discovery of previously unknown patterns. Data mining tools sweep through databases and identify previously hidden patterns in one step. An example of pattern discovery is the analysis of retail sales data to identify seemingly unrelated products that are often purchased together. Other pattern discovery problems include detecting fraudulent credit card transactions and identifying anomalous data that could represent data entry keying errors.
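The pattern-discovery example above, products that are often purchased together, can be sketched as a simple pair count over transactions. This is a toy illustration with invented basket data; production tools use association-rule algorithms such as Apriori over millions of transactions:

```python
from itertools import combinations
from collections import Counter

# Toy transaction data: each set is one shopping basket.
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"beer", "diapers", "bread"},
    {"beer", "diapers"},
    {"bread", "butter", "jam"},
]

# Count how often each pair of products appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Pairs seen in at least two baskets are candidate "bought together" patterns.
frequent_pairs = {pair: n for pair, n in pair_counts.items() if n >= 2}
print(frequent_pairs)  # {('bread', 'butter'): 3, ('beer', 'diapers'): 2}
```

The same counting idea, extended with support and confidence thresholds, is the basis of the fraud-detection and market-basket analyses mentioned above.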

Data mining techniques can yield the benefits of automation on existing software and hardware platforms, and can be implemented on new systems as existing platforms are upgraded and new products developed. When data mining tools are implemented on high performance parallel processing systems, they can analyze massive databases in minutes. Faster processing means that users can automatically experiment with more models to understand complex data. High speed makes it practical for users to analyze huge quantities of data. Larger databases, in turn, yield improved predictions. Databases can be larger in both depth and breadth:

  • More columns. Analysts must often limit the number of variables they examine when doing hands-on analysis due to time constraints. Yet variables that are discarded because they seem unimportant may carry information about unknown patterns. High performance data mining allows users to explore the full depth of a database, without preselecting a subset of variables.

  • More rows. Larger samples yield lower estimation errors and variance, and allow users to make inferences about small but important segments of a population.

A recent Gartner Group Advanced Technology Research Note listed data mining and artificial intelligence at the top of the five key technology areas that “will clearly have a major impact across a wide range of industries within the next 3 to 5 years.” Gartner also listed parallel architectures and data mining as two of the top 10 new technologies in which companies will invest during the next 5 years. According to a recent Gartner HPC Research Note, “With the rapid advance in data capture, transmission and storage, large-systems users will increasingly need to implement new and innovative ways to mine the after-market value of their vast stores of detail data, employing MPP [massively parallel processing] systems to create new sources of business advantage (0.9 probability).”

3. The most commonly used techniques in data mining are:

  • Artificial neural networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.

  • Decision trees: Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi-Square Automatic Interaction Detection (CHAID).

  • Genetic algorithms: Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of evolution.

  • Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k record(s) most similar to it in a historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor technique.
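As a minimal sketch of the nearest neighbor method, the following classifies a record by majority vote among its k closest historical records. The dataset and the targeted-marketing framing are invented for illustration:

```python
import math
from collections import Counter

def knn_classify(history, query, k=3):
    """Classify `query` by majority vote among the k records in
    `history` (a list of (feature_vector, label) pairs) that are
    closest to it by Euclidean distance."""
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Invented historical dataset: (income in $1000s, age) -> responded to mailing?
history = [
    ((30, 25), "no"), ((35, 30), "no"), ((40, 28), "no"),
    ((80, 45), "yes"), ((85, 50), "yes"), ((90, 40), "yes"),
]

print(knn_classify(history, (82, 47)))  # prints "yes"
```

A real deployment would normalize the features (income and age are on very different scales) and use an index structure rather than sorting the whole history per query.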

4. Text Mining Techniques

The key techniques of text mining include:

1. Feature extraction

2. Thematic indexing

3. Clustering

4. Summarization

These four techniques are essential because they solve two key problems in applying text mining: they make textual information accessible, and they reduce the volume of text that must be read by end users before information is found.

Feature extraction deals with finding particular pieces of information within a text. The target information can be general in form, such as names or dates, or it can be pattern-driven. For example, applications analyzing merger and acquisition stories may extract the names of the companies involved, the cost, the funding mechanisms, and whether or not regulatory approval is required.

Thematic indexing uses knowledge about the meaning of words in a text to identify the broad topics covered in a document. For example, documents about aspirin might be classified under pain relievers or analgesics. Thematic indexing such as this is often implemented using multidimensional taxonomies. A taxonomy, in the text mining sense, is a hierarchical knowledge representation scheme. This construct, sometimes called an ontology to distinguish it from navigational taxonomies such as Yahoo!'s, provides the means to search for documents about a topic instead of documents with particular keywords. For example, an analyst researching mobile communications should be able to search for documents about wireless protocols without having to know key phrases such as wireless application protocol (WAP).

Clustering is another text mining technique with applications in business intelligence. Clustering groups similar documents according to their dominant features. In text mining and information retrieval, a weighted feature vector is frequently used to describe a document. These feature vectors contain a list of the main themes or keywords, along with a numeric weight indicating the relative importance of the theme or term to the document as a whole. Unlike data mining applications, which use a fixed set of features for all analyzed items (e.g. age, income, gender, etc.), documents are described with a small number of terms or themes chosen from potentially thousands of possible dimensions.
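A weighted feature vector of this kind can be illustrated with plain term-frequency weights and cosine similarity, the usual similarity measure over such vectors. This is a minimal sketch; real systems typically use TF-IDF weighting, stop-word removal, and much larger vocabularies:

```python
import math
from collections import Counter

def vectorize(text):
    """Build a term-frequency feature vector: term -> weight."""
    return Counter(text.lower().split())

def cosine(v1, v2):
    """Cosine similarity between two sparse feature vectors."""
    dot = sum(w * v2[t] for t, w in v1.items() if t in v2)
    norm1 = math.sqrt(sum(w * w for w in v1.values()))
    norm2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

d1 = vectorize("wireless protocol for mobile devices")
d2 = vectorize("mobile wireless communication protocol")
d3 = vectorize("quarterly earnings report for shareholders")

# Documents on the same theme score higher than unrelated ones.
print(cosine(d1, d2) > cosine(d1, d3))  # prints True
```

Clustering algorithms then group documents whose vectors are close under this measure.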
There is no single best way to deal with document clustering, but three approaches are often used: hierarchical clusters, binary clusters, and self-organizing maps. Hierarchical clusters [3] use a set-based approach. The root of the hierarchy is the set of all documents in a collection, and the leaf nodes are sets with individual documents; the intervening layers of the hierarchy hold progressively smaller sets of documents, grouped by similarity, as one moves from the root toward the leaves. In binary clusters, each document is in one and only one cluster, and clusters are created to maximize the similarity measure between documents in the same cluster and minimize the similarity measure between documents in different clusters. Self-organizing maps (SOMs) use neural networks to map documents from sparse, high-dimensional spaces into two-dimensional maps; similar documents tend to map to the same position in the two-dimensional grid.

The last text mining technique is summarization. The purpose of summarization is to describe the content of a document while reducing the amount of text a user must read. The main ideas of most documents can be described with as little as 20 percent of the original text, so little is lost by summarizing. Like clustering, there is no single summarization algorithm. Most use morphological analysis of words to identify the most frequently used terms while eliminating words that carry little meaning, such as the articles the, an, and a. Some algorithms weight terms used in opening or closing sentences more heavily than other terms, while some approaches look for key phrases that identify the main themes of the document.
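The frequency-based summarization described above can be sketched by scoring each sentence by the total frequency of its content words and keeping the top-scoring sentences. The stop-word list and example text here are simplistic assumptions for illustration:

```python
import re
from collections import Counter

# A deliberately tiny stop-word list for illustration.
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "that"}

def summarize(text, n_sentences=1):
    """Return the n sentences whose content words occur most often
    in the text as a whole."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOP_WORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOP_WORDS)

    return sorted(sentences, key=score, reverse=True)[:n_sentences]

text = ("Data mining finds patterns in data. "
        "Patterns in data support decisions. "
        "The weather was pleasant yesterday.")
print(summarize(text))  # ['Data mining finds patterns in data.']
```

The positional weighting mentioned above (favoring opening or closing sentences) would be an extra term added to `score`.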

5. Application Areas of Text Mining

From government and legislative organizations, to corporations and universities, to journalists, writers, and college students, we all create, store, retrieve, and analyze texts. Hence, numerous organizations are faced with various document management and text analysis tasks. Consider a few simple examples:

  • Internet search engines could deliver much better quality results by accepting and making sense of natural language queries. If documents found in response to a query were analyzed semantically for their relevance in the context of the original query, it could significantly increase the precision of the search: instead of finding an overwhelming set of more than 10,000 documents in response to your query, the system could provide you with a short list of the most relevant documents.

  • Call center specialists have to understand customer support questions, quickly select relevant documents among available manuals, frequently-asked-questions lists, and engineering notes, and retrieve those bits of knowledge that help answer the question. An automated system for categorizing available materials and retrieving the most relevant fragments matching natural language questions could save hundreds of thousands of man-hours and dramatically reduce response time. Identifying the best fragments through thesauri and ontologies could significantly improve recall, or the thoroughness of the search.

  • Lawyers, insurers, and venture capitalists often have to quickly grasp the meaning of cases, claims, and proposals, respectively. They need to improve the quality of querying the Web and diverse databases to find and retrieve relevant documents. Their practice could benefit tremendously from automated summarization of texts and feature extraction, where key points from the text are organized in a database holding meta-information to improve future access to the knowledge contained in documents.

  • Researching medical journals for new hypotheses of cause and effect for a disease is an ideal case of what text mining ought to be able to do. Intelligent email routing, automatic chat room monitoring, and Web page monitoring are all important applications as well.

5.1. Grand Challenges for Text Mining

Text mining is an exciting research area that tries to solve the information overload problem by using techniques from data mining, machine learning, NLP, IR, and knowledge management. Text mining involves the preprocessing of document collections (text categorization, information extraction, term extraction), the storage of the intermediate representations, techniques to analyze these intermediate representations (distribution analysis, clustering, trend analysis, association rules, etc.), and visualization of the results. Here are some of the challenges facing the text mining research area:

5.2. Challenge 1: Entity Extraction

Most text analytics systems rely on accurate extraction of entities and relations from documents. However, the accuracy of entity extraction systems in some domains reaches only 70-80%, creating a noise level that prevents the adoption of text mining systems by a wider audience. We are seeking domain-independent and language-independent NER (named entity recognition) systems that will be able to reach an accuracy of 99-100%. Based on such systems, we are seeking domain-independent and language-independent relation extraction systems that will be able to reach a precision of 98-100% and a recall of 95-100%. Since these systems should work in any domain, they must be totally autonomous and require no human intervention.

5.3. Challenge 2: Autonomous Text Analysis

Text analytics systems today are largely user guided; they enable users to view various aspects of the corpus. We would like to have a text analytics system that is totally autonomous: one that will analyze huge corpora and come up with truly interesting findings that are not captured by any single document in the corpus and are not known beforehand. The system could utilize the Internet to filter findings that are already known. The "interest" measure, which is inherently subjective, would be defined by a committee of experts in each domain. Such systems could then be used for alerting purposes in the financial domain, the anti-terror domain, the biomedical domain, and many other commercial domains. The system would receive streams of documents from a variety of sources and send emails to relevant people whenever an "interesting" finding is detected. Building such a system on top of the capabilities described in Challenges 1 and 2 is our text mining grand challenge.

6. Conclusion

Mining texts in different languages is a major problem, since text mining tools should be able to work with many languages and multilingual documents. Integrating a domain knowledge base with a text mining engine would boost its efficiency, especially in the information retrieval and information extraction phases. Acquiring such knowledge implies effective querying of the documents as well as the combination of information pieces from different textual sources (e.g. the World Wide Web). Discovering such hidden knowledge is an essential requirement for many corporations, due to its wide spectrum of applications.

7. References

1. Jochen Dorre, Peter Gersti, Roland Seiffert (1999), Text Mining: Finding Nuggets in Mountains of Textual Data, ACM KDD 1999, San Diego, CA, USA.

2. Ah-Hwee Tan (1999), Text Mining: The State of the Art and the Challenges, in Proceedings, PAKDD'99 Workshop on Knowledge Discovery from Advanced Databases (KDAD'99), Beijing, pp. 71-76, April 1999.

3. Danial Tkach (1998), Text Mining Technology: Turning Information Into Knowledge, a white paper from IBM.

4. Helena Ahonen, Oskari Heinonen, Mika Klemettinen, A. Inkeri Verkamo (1997), Applying Data Mining Techniques in Text Analysis, Report C-1997-23, Department of Computer Science, University of Helsinki, 1997.

5. Mark Dixon (1997), An Overview of Document Mining Technology.

6. The Object-Oriented Approach to the Medical Real Time System Design, Proceedings of MIE-91, in: Lecture Notes in Medical Informatics, Springer-Verlag, Berlin, v. 45, pp. 508-512.

7. Integrating Quantitative and Qualitative Discovery in the ABACUS System, in: Y. Kodratoff, R.S. Michalski (Eds.): Machine Learning: An Artificial Intelligence Approach (Volume III), San Mateo, CA: Kaufmann, pp. 153-190.

8. Kiselev, M.V. (1994), PolyAnalyst – a Machine Discovery System Inferring Functional Programs, Proceedings of AAAI Workshop on Knowledge Discovery in Databases '94, Seattle, pp. 237-249.

9. PolyAnalyst – a Machine Discovery System for Intelligent Analysis of Clinical Data, ESCTAIC-4 Abstracts (European Society for Computer Technology in Anaesthesiology and Intensive Care), Halkidiki, Greece, p. H6.

10. Scientific Discovery: Computational Explorations of the Creative Processes, Cambridge, MA: MIT Press.


Mr. Chandrakant R. Sapute is a librarian at Godavari College of Engineering, Jalgaon, Maharashtra. He has 11 years of experience in teaching and librarianship. He has been associated with the KLA (Khandesh Library Association). He has published six papers in national and international venues. His areas of interest are library automation and digitization.


Chandrakant R Sapute