
DATA WAREHOUSE AND MINING

Unit II: Getting to Know Your Data:

Covered Topics: Data Objects and Attribute Types, Basic Statistical Descriptions of Data, Data Visualization, Measuring Data Similarity and Dissimilarity.

Data Objects and Attribute Types:

In the context of data mining and databases, data objects refer to the entities or items that are described and analyzed. These objects could represent anything from customers and products to events or transactions. Each data object is characterized by a set of attributes that describe its properties. Attribute types, on the other hand, define the nature of the information stored in these attributes. Here's a breakdown of these concepts:

Data Objects:

Data objects are entities that are described or analyzed in a dataset. They can be tangible entities, such as customers or products, or intangible entities, such as events or transactions. In a database or data warehouse, each data object is typically represented as a record or row in a table.

Examples of Data Objects:

1. Customer: A data object in a customer database, with attributes like name, age, and address.

2. Product: A data object representing a product, with attributes like product ID, price, and category.

3. Transaction: A data object representing a financial transaction, with attributes like transaction ID, date, and amount.


Attributes:

Attributes are the characteristics or properties that describe a data object. Each data object has a set of attributes associated with it, and these attributes provide information about the object. Attributes can have different types, such as numerical, categorical, or textual.


Examples of Attributes:

1. Customer Attributes:

  • Name (textual)
  • Age (numerical)
  • Address (textual)
  • Customer ID (categorical)


2. Product Attributes:

  • Product ID (categorical)
  • Price (numerical)
  • Category (categorical)
  • Manufacturer (textual)


3. Transaction Attributes:

  • Transaction ID (categorical)
  • Date (date)
  • Amount (numerical)
  • Payment Method (categorical)


Attribute Types:

Attributes can be classified into different types based on the nature of the information they represent. Common attribute types include:


1. Nominal Attributes:

  • Represents categories with no inherent order. Example: Color (Red, Green, Blue).


2. Ordinal Attributes:

  • Represents categories with a meaningful order but no fixed interval. Example: Education Level (High School, Bachelor's, Master's).


3. Interval Attributes:

  • Represents numerical values with a consistent interval but no true zero point. Example: Temperature in Celsius.


4. Ratio Attributes:

  • Represents numerical values with a consistent interval and a true zero point. Example: Age, Income.


5. Categorical Attributes:

  • Represents categories or labels. Can be nominal or ordinal.


6. Numerical Attributes:

  • Represents numerical values. Can be interval or ratio.


7. Textual Attributes:

  • Represents text data. Example: Description, Comments.

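As a minimal illustration, the attribute types above can be attached to a single data object in Python. The field names and values here are hypothetical, chosen only to show one attribute of each kind:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """One data object (one row in a customer table); each field is an attribute."""
    customer_id: str  # nominal/categorical: a label with no order or magnitude
    name: str         # textual: free-form text
    education: str    # ordinal: High School < Bachelor's < Master's
    age: int          # ratio: numeric with a true zero point
    signup_year: int  # interval: year 0 is arbitrary, so ratios of years are meaningless

# One made-up record describing one customer.
alice = Customer("C-1001", "Alice", "Master's", 34, 2021)
print(alice)
```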

Understanding the types of data objects and attributes is fundamental to the process of data mining, as it guides the selection of appropriate techniques and methods for analyzing and extracting patterns from the data. Different types of attributes may require different approaches during the preprocessing and analysis stages.

Basic Statistical Descriptions of Data

Basic statistical descriptions provide a summary of the main characteristics of a dataset, offering insights into its central tendency, variability, and distribution. Here are some fundamental statistical descriptions of data:

1. Measures of Central Tendency:

These measures describe the center or average of a dataset.

  • Mean (Arithmetic Average): The sum of all values divided by the number of values. It is sensitive to extreme values. $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

  • Median: The middle value in a sorted dataset. It is less affected by extreme values.

    • For an odd-sized dataset: the middle value.
    • For an even-sized dataset: the average of the two middle values.
  • Mode: The value(s) that occur most frequently in the dataset.
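
A quick sketch of these three measures using Python's standard statistics module; the sample values are invented for illustration:

```python
import statistics

data = [2, 3, 3, 5, 7, 10, 10, 10]  # made-up sample, n = 8 (even)

print(statistics.mean(data))    # 6.25: sum 50 divided by 8
print(statistics.median(data))  # 6.0: average of the two middle values, 5 and 7
print(statistics.mode(data))    # 10: the most frequent value
```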

2. Measures of Dispersion or Variability:

These measures describe how spread out the values in a dataset are.

  • Range: The difference between the maximum and minimum values in the dataset. $\text{Range} = \max(x) - \min(x)$

  • Variance: The average of the squared differences from the mean. It measures the overall variability. $\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2$

  • Standard Deviation: The square root of the variance. It provides a more interpretable measure of spread. $\sigma = \sqrt{\sigma^2}$

  • Interquartile Range (IQR): The range between the first quartile (Q1) and the third quartile (Q3). It is less sensitive to outliers than the range. $\text{IQR} = Q_3 - Q_1$
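
Continuing with the same made-up sample, these measures can be computed with standard-library calls (statistics.pvariance and statistics.pstdev use the 1/n population formulas above):

```python
import statistics

data = [2, 3, 3, 5, 7, 10, 10, 10]

rng = max(data) - min(data)                   # Range = max - min
var = statistics.pvariance(data)              # population variance (1/n formula)
std = statistics.pstdev(data)                 # standard deviation = sqrt(variance)
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1                                 # IQR = Q3 - Q1

print(rng, var, round(std, 2), iqr)
```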

3. Measures of Shape and Distribution:

These measures provide insights into the shape of the dataset's distribution.

  • Skewness: A measure of the asymmetry of the distribution. Positive skewness indicates a right-skewed distribution, while negative skewness indicates a left-skewed distribution. $\text{Skewness} = \dfrac{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^3}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right)^{3/2}}$

  • Kurtosis: A measure of the "tailedness" or sharpness of the distribution. Positive (excess) kurtosis indicates heavy tails, while negative kurtosis indicates light tails. $\text{Kurtosis} = \dfrac{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^4}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\right)^{2}} - 3$
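
Both shape measures follow directly from the central moments in the formulas above; a small self-contained sketch (sample values again invented):

```python
def central_moment(xs, k):
    """k-th central moment: (1/n) * sum((x - mean)^k)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** k for x in xs) / n

def skewness(xs):
    return central_moment(xs, 3) / central_moment(xs, 2) ** 1.5

def kurtosis(xs):
    """Excess kurtosis: 0 for a normal distribution."""
    return central_moment(xs, 4) / central_moment(xs, 2) ** 2 - 3

data = [2, 3, 3, 5, 7, 10, 10, 10]
print(round(skewness(data), 3), round(kurtosis(data), 3))
```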

4. Percentiles and Quartiles:

  • Percentiles: Values below which a given percentage of observations fall.

    • For example, the 25th percentile (Q1) is the value below which 25% of the data falls.
  • Quartiles: Values that divide the dataset into four equal parts.

    • Q1: 25th percentile
    • Q2: 50th percentile (median)
    • Q3: 75th percentile
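
The statistics.quantiles call used earlier generalises to arbitrary percentiles; a brief example with a convenient 1-to-100 sample:

```python
import statistics

data = list(range(1, 101))  # the values 1..100

q1, q2, q3 = statistics.quantiles(data, n=4)     # the three quartile cut points
print(q1, q2, q3)                                # 25.25 50.5 75.75

percentiles = statistics.quantiles(data, n=100)  # 99 percentile cut points
print(percentiles[24])                           # 25th percentile, i.e. Q1
```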

5. Covariance and Correlation:

These measures describe the relationship between two variables.

  • Covariance: Measures the degree to which two variables vary together. $\text{Cov}(X, Y) = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$

  • Correlation Coefficient (Pearson): Normalized measure of covariance, ranging from -1 to 1. $\text{Correlation}(X, Y) = \dfrac{\text{Cov}(X, Y)}{\sqrt{\text{Var}(X)\,\text{Var}(Y)}}$
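
Both measures can be computed from their definitions in a few lines. A sketch using the same 1/n convention as the formulas above; the x and y values are made up:

```python
def covariance(xs, ys):
    """Population covariance: (1/n) * sum((x - mean_x)(y - mean_y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def correlation(xs, ys):
    """Pearson correlation: covariance normalised into [-1, 1].
    Note that covariance(xs, xs) is just the variance of xs."""
    return covariance(xs, ys) / (covariance(xs, xs) * covariance(ys, ys)) ** 0.5

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]      # perfectly linear in x
print(correlation(x, y))  # 1.0
```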

These basic statistical descriptions are essential tools for summarizing and interpreting datasets, providing a foundation for more advanced statistical analyses and data mining techniques.

Data Visualization

Data visualization is the presentation of data in a graphical or pictorial format to facilitate understanding and interpretation. It transforms raw data into visual representations such as charts, graphs, maps, and dashboards, making complex information more accessible and comprehensible. Effective data visualization plays a crucial role in data analysis, decision-making, and communication. Here are key aspects of data visualization:


1. Types of Data Visualizations

  • Bar Charts: Display data using rectangular bars of different lengths.
  • Line Charts: Connect data points with lines, useful for showing trends over time.
  • Pie Charts: Display parts of a whole, with each slice representing a percentage.
  • Scatter Plots: Show the relationship between two variables with points on a 2D plane.
  • Histograms: Visualize the distribution of a continuous dataset.
  • Heatmaps: Represent data values in a matrix using color intensity.
  • Box Plots: Display statistical information about the distribution of a dataset.
  • Bubble Charts: Extend scatter plots by adding a third dimension using bubble size.
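
As a small illustration of the first two chart types, the sketch below uses Matplotlib (one of the tools listed in section 3); the sales figures are invented:

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 150, 90, 180]  # made-up values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(months, sales)               # bar chart: compare categories
ax1.set_title("Monthly sales (bar)")

ax2.plot(months, sales, marker="o")  # line chart: show the trend over time
ax2.set_title("Monthly sales (line)")

plt.tight_layout()
plt.show()
```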


2. Benefits of Data Visualization:

  • Pattern Recognition: Visualizations help identify patterns and trends in data.
  • Storytelling: Visuals enhance the storytelling aspect of data, making it more engaging.
  • Decision-Making: Clear visuals support better decision-making by conveying information efficiently.
  • Communication: Complex data is communicated more effectively through visual representations.
  • Exploration: Interactive visualizations allow users to explore data and derive insights.


3. Tools for Data Visualization:

  • Tableau: Powerful and widely used for creating interactive dashboards and reports.
  • Microsoft Power BI: Enables data visualization and business intelligence.
  • Google Data Studio: Free tool for creating customizable reports and dashboards.
  • Matplotlib and Seaborn: Python libraries for creating static charts and statistical graphics.
  • D3.js: JavaScript library for creating dynamic and interactive visualizations.
  • Plotly: Python and JavaScript libraries for creating interactive plots.


4. Best Practices in Data Visualization:

  • Simplicity: Keep visualizations clear and straightforward to avoid confusion.
  • Relevance: Focus on presenting information that aligns with the message or goal.
  • Consistency: Maintain consistent color schemes, fonts, and labels for a professional look.
  • Interactivity: Use interactive features when appropriate to allow users to explore data.
  • Annotations: Provide context and explanations through annotations.
  • Accessibility: Ensure visualizations are accessible to a diverse audience.

5. Challenges in Data Visualization:

  • Misinterpretation: Poorly designed visualizations may lead to misinterpretation of data.
  • Overcomplexity: Overly complex visuals can confuse rather than clarify information.
  • Biases: Unintentional biases in visualizations can mislead audiences.
  • Choosing the Wrong Visualization: Selecting an inappropriate type of visualization for the data.

Data visualization is a dynamic field, continually evolving with advancements in technology and design. When done effectively, it empowers individuals and organizations to gain insights, communicate findings, and drive informed decision-making.

Measuring Data Similarity and Dissimilarity

Measuring data similarity and dissimilarity is crucial in various fields, including data mining, machine learning, and pattern recognition. These measures help quantify the resemblance or dissimilarity between data points, facilitating tasks such as clustering, classification, and recommendation. Here are several common methods for measuring data similarity and dissimilarity:

1. Euclidean Distance:

  • Similarity: Inverse of Euclidean distance.
  • Formula: $\text{Similarity}(x, y) = \dfrac{1}{1 + \text{Euclidean Distance}(x, y)}$
  • Euclidean Distance Formula: $d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$

2. Manhattan Distance (City Block Distance):

  • Formula: $d(x, y) = \sum_{i=1}^{n} |x_i - y_i|$

3. Minkowski Distance:

  • Formula: $d(x, y) = \left(\sum_{i=1}^{n} |x_i - y_i|^p\right)^{1/p}$
  • Special Cases:
    • $p = 1$: Manhattan Distance
    • $p = 2$: Euclidean Distance
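
Since Minkowski generalises the first two distances, one small function covers items 1-3 (a sketch; the example vectors are arbitrary):

```python
def minkowski(x, y, p):
    """Minkowski distance; p = 1 gives Manhattan, p = 2 gives Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (1, 2, 3), (4, 6, 3)
print(minkowski(x, y, 1))            # Manhattan: 3 + 4 + 0 = 7
print(minkowski(x, y, 2))            # Euclidean: sqrt(9 + 16 + 0) = 5.0
print(1 / (1 + minkowski(x, y, 2)))  # similarity via 1 / (1 + distance)
```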

4. Cosine Similarity:

  • Measures the cosine of the angle between two vectors.
  • Formula: $\text{Cosine Similarity}(x, y) = \dfrac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert}$
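
A direct translation of the formula (sketch; the example vectors are chosen so the answer is easy to verify):

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

print(cosine_similarity((1, 0, 1), (1, 1, 0)))  # 0.5, i.e. a 60-degree angle
```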

5. Jaccard Similarity (for Sets):

  • Measures the similarity between two sets.
  • Formula: $\text{Jaccard Similarity}(A, B) = \dfrac{|A \cap B|}{|A \cup B|}$
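
Python's built-in set operations make this a one-liner (sketch; the sets are made up):

```python
def jaccard(a, b):
    """Size of the intersection divided by size of the union of two sets."""
    return len(a & b) / len(a | b)

print(jaccard({"red", "green", "blue"}, {"green", "blue", "black"}))  # 2/4 = 0.5
```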

6. Hamming Distance (for Binary Data):

  • Measures the number of positions at which corresponding bits are different.
  • Formula: $\text{Hamming Distance}(x, y) = \sum_{i=1}^{n} \left(1 - \delta_{x_i, y_i}\right)$
  • $\delta_{x_i, y_i}$ is the Kronecker delta (1 if $x_i = y_i$, 0 otherwise).
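
A short sketch for equal-length sequences; dividing the result by the length gives the Hamming loss of item 8 below:

```python
def hamming(x, y):
    """Count the positions where two equal-length sequences differ."""
    if len(x) != len(y):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(x, y))

bits_a, bits_b = "10110", "11100"
print(hamming(bits_a, bits_b))                # 2 (positions 1 and 3 differ)
print(hamming(bits_a, bits_b) / len(bits_a))  # Hamming loss: 0.4
```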

7. Pearson Correlation Coefficient:

  • Measures linear correlation between two variables.
  • Formula: $\text{Correlation}(X, Y) = \dfrac{\text{Cov}(X, Y)}{\sigma_X \, \sigma_Y}$

8. Hamming Loss:

  • Measures the fraction of labels that are different between two instances.
  • Formula: $\text{Hamming Loss}(x, y) = \dfrac{1}{n} \sum_{i=1}^{n} \left(1 - \delta_{x_i, y_i}\right)$

9. Edit Distance (Levenshtein Distance):

  • Measures the minimum number of single-character edits required to transform one string into another.
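
The standard dynamic-programming solution, sketched here with a rolling row to keep memory linear in the length of the second string:

```python
def levenshtein(s, t):
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn s into t."""
    prev = list(range(len(t) + 1))  # distances from "" to each prefix of t
    for i, cs in enumerate(s, start=1):
        curr = [i]  # deleting the first i characters of s
        for j, ct in enumerate(t, start=1):
            cost = 0 if cs == ct else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```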

10. Mahalanobis Distance:

  • Accounts for correlations between variables and different variances.
  • Formula: $\text{Mahalanobis Distance}(x, y) = \sqrt{(x - y)^{\top} \Sigma^{-1} (x - y)}$
  • $\Sigma$ is the covariance matrix.
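
A sketch using NumPy; the sample rows are invented, and in practice the covariance matrix is estimated from the data, as np.cov does here:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """sqrt((x - y)^T inverse(cov) (x - y)) for covariance matrix cov."""
    diff = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Rows are observations, columns are variables; values are made up.
data = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
cov = np.cov(data, rowvar=False)
print(mahalanobis(data[0], data[2], cov))
```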

11. Correlation Distance:

  • Formula: $\text{Correlation Distance}(X, Y) = 1 - \text{Correlation}(X, Y)$

12. Earth Mover's Distance (Wasserstein Distance):

  • Measures the minimum cost to transform one distribution into another.

Choosing the most appropriate measure depends on the nature of the data and the specific requirements of the analysis. Different measures may be more suitable for different types of data, such as numerical, categorical, or binary data. The choice often involves considering factors like the scale of the data, the presence of outliers, and the desired sensitivity to various data characteristics.