
Books published by Technics Publications

  • by Laura Madsen
    551,-

    German edition. Data governance is broken. It is time we fixed it. Why is data governance so ineffective? The truth is that data governance programs were not designed for the way our data teams work. They were not designed for a modern organization at all. They were designed back when reports still arrived through the interoffice mail. The flow of data into, within, and out of today's organizations is a tsunami breaking through rigid data governance methods. Yet our programs still rely on a command-and-control approach. Have you ever tried to control a tsunami? Every organization that uses data knows it needs data governance. Data literacy efforts and legislation such as the GDPR have become the bellwethers for our governance functions. But we still sit in understaffed data governance meetings with too many open questions to move forward. The program has no agility because we assume a fragility in the data that does not exist. We persist with archaic methods that bring no value to our companies. We cannot gain deep insights from our data without good data governance practice. Laura Madsen shows you how to redefine governance for the new era. In a casual, witty style, Madsen draws on her decades of experience, presents interviews with first-rate experts, and grounds her perspective in research. See how it all fell apart, question traditional beliefs, and embrace a fundamental reset: governance is not about stopping or preventing the use of data, but about supporting it. You will be able to bring trust and value back to your data governance functions and learn about a people-oriented approach to governance, processes that support the data tsunami, and modern technologies that enable data governance.

  • by Steve Hoberman
    551,-

    The Align > Refine > Design series covers conceptual, logical, and physical data modeling (schema design and patterns) for leading technologies, combining proven data modeling practices with database-specific features to produce better applications. Read Cassandra Data Modeling and Schema Design if you are a data professional who needs to expand your modeling skills to include Cassandra or a technologist who knows Cassandra but needs to grow your schema design skills. The book's introduction and three chapters cover the Align, Refine, and Design approach. We include what the level does in the name by rebranding Conceptual, Logical, and Physical into Align, Refine, and Design. The introduction covers the three modeling characteristics of precise, minimal, and visual; the three model components of entities, relationships, and attributes (including keys); the three model levels of conceptual (align), logical (refine), and physical (design); and the three modeling perspectives of relational, dimensional, and query. Chapter 1, Align, is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Chapter 2, Refine, is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Chapter 3, Design, is about the technical requirements. That is, designing to accommodate our model's unique software and hardware needs. Align, Refine, and Design: that's the approach followed in this book and reinforced through an animal shelter case study. If you are interested in learning how to build multiple database solutions, read all the books in the Align > Refine > Design series. Since each book uses the same template, you can quickly skill up on additional database technologies.

  • by Robert Seiner
    612,-

    Gouvernance non intrusive des données frappe encore (the French edition of Non-Invasive Data Governance Strikes Again) offers a blend of 50 applicable lessons and perspectives gained over years of helping organizations around the world follow the non-invasive approach popularized by the bestseller Non-Invasive Data Governance. Non-Invasive Data Governance (NIDG) does not mean that governing data and information will be easy or free of trials. Nor does non-invasive mean that the program will have no impact. The title of this book is a dichotomy: achieving measurable results by being less threatening, while the impact of this approach can be enormous. In NIDG, the emphasis is on leveraging existing accountability while examining opportunities for improvement. This book presents a new framework for NIDG that views the key elements of a successful program through different perspectives of the organization.

  • by Lambert Hogenhout
    551,-

    Apply a step-by-step approach to develop your organization's global data privacy strategy. Data is everywhere. Organizations continuously use data in new ways, often generating cross-border data flows. At the same time, concern about the use of personal data is growing. Every year, more countries adopt data privacy laws and our expectations increase on how companies respect our private data. A data privacy strategy is no longer just about compliance; it is good business. A clear and effective data privacy program can build customer trust and strengthen a brand's reputation. We cover the art of crafting an effective data privacy strategy that aligns with business objectives and brand positioning yet ensures compliance with relevant laws. Gain a foundational understanding of data privacy issues as a prerequisite to developing a custom strategy. Use our review of the major legislations around the world to guide you in creating a data privacy strategy. Benefit from our insights on the relation between data privacy programs and a data strategy, an IT strategy, and risk management frameworks. Be able to apply methodologies to help you stay on track, such as Privacy by Design and data minimization. Incorporate the cultural and ethical considerations of data privacy across different countries where you may operate. Know how emerging privacy enhancing technologies (PETs) can be powerful tools in implementing your strategy, and pinpoint the intersection between data privacy and AI. The stakes for data privacy have never been higher and this book will help you up your game.

  • by Jeff Harris
    734,-

    Manage and optimize metadata using Artificial Intelligence (AI) and Machine Learning (ML) through this comprehensive guide on the intricate and pivotal world of data cataloging. The book demystifies the concepts of data cataloging, highlighting its critical role in ensuring that data within organizations is accurate, accessible, and actionable. Jeff meticulously lays out strategies and insights on creating a robust data catalog that manages metadata and uses AI and ML to enhance its usability and reliability. In an era dominated by data-driven decisions, understanding and implementing effective data cataloging has become paramount for businesses and organizations across the globe. Jeff navigates through the complexities of data cataloging, providing readers with practical insights, actionable strategies, and a thorough understanding of utilizing AI and ML to enhance metadata management. The book is a doorway to understanding and implementing a fundamental component that ensures the reliability and accessibility of your data, enabling informed decision-making and data-driven strategies. This book is for data professionals, IT experts, business analysts, and organizational leaders who need a foundational and advanced understanding of data cataloging. Through real-world examples, case studies, and a step-by-step guide on implementing the concepts discussed, Jeff ensures that the reader gains the knowledge and tools needed to navigate the complexities of data cataloging. His insights on leveraging AI and ML for metadata management provide a futuristic perspective and offer practical strategies that organizations can implement to enhance their data management practices. By embracing the book's principles, you can navigate the vast and often confusing world of data management with clarity and precision. This book will guide you through creating, managing, and optimizing a data catalog that serves as the backbone of your data management strategy. This book is an investment towards understanding, implementing, and mastering data cataloging, ensuring that your data is not merely stored but is optimized, reliable, and ready to drive your strategic initiatives forward. For anyone seeking to harness the power of their data, ensure its reliability, and utilize it to drive informed decisions and strategies, this indispensable guide will navigate you through the complexities and opportunities present in the world of data cataloging, ensuring that you are well-equipped to create a robust, reliable, and optimized data management strategy.

  • by Steve Hoberman
    551,-

    The Align > Refine > Design series covers conceptual, logical, and physical data modeling (schema design and patterns) for leading technologies, combining proven data modeling practices with database-specific features to produce better applications. Read TerminusDB Data Modeling and Schema Design if you are a data professional who needs to expand your modeling skills to include TerminusDB or a technologist who knows TerminusDB but needs to grow your schema design skills. The book's introduction and three chapters cover the Align, Refine, and Design approach. We include what the level does in the name by rebranding Conceptual, Logical, and Physical into Align, Refine, and Design. The introduction covers the three modeling characteristics of precise, minimal, and visual; the three model components of entities, relationships, and attributes (including keys); the three model levels of conceptual (align), logical (refine), and physical (design); and the three modeling perspectives of relational, dimensional, and query. Chapter 1, Align, is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Chapter 2, Refine, is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Chapter 3, Design, is about the technical requirements. That is, designing to accommodate our model's unique software and hardware needs. Align, Refine, and Design: that's the approach followed in this book and reinforced through an animal shelter case study. If you are interested in learning how to build multiple database solutions, read all the books in the Align > Refine > Design series. Since each book uses the same template, you can quickly skill up on additional database technologies.

  • by Marilu Lopez
    551,-

    Strategy is the key to winning any game, and Data Strategy is the key to winning the Data Management game. Data Strategy is a concept in the plural, as it comprises a set of different interrelated strategies that work together to meet business strategic objectives and business data needs and solve an organization's data pain points. Data Strategies help manage expectations of what is happening in the data arena. Describing them is an art. Thus, a canvas is needed to capture and communicate them. This book shows you what those constituent strategies should be and what goes into their canvases. It is a comprehensive guide that will take you step by step through producing an Enterprise-Level Data Strategy. If you follow this advice, you will have a practical, agile, and easy-to-communicate (PAC) Data Strategy to keep your teams and organization aligned.
    "We all know that data will play an increasingly important role in all future operations. How you personally and your organization apply data to operations will be the difference between success and failure. Having worked in this area for many years, I believe the time is ripe for the next step in the evolution of data strategy. Marilu has taken this step. As an enthusiastic adherent to the business canvas technique, I greatly admire the diligence with which she has applied this to the concept of formulating a data strategy. There is a wealth of material in this very dense book of extremely useful tips, techniques, and guidance. Probably the most difficult aspect of data strategy formulation centers on the challenge of meaningfully engaging various stakeholders in the necessary dialogs required to take your organization's data and apply it meaningfully in support of the organizational strategy. The method described provides all the guidance you will need." Peter Aiken, President, DAMA International
    "This is a comprehensive book on data strategies. Marilu Lopez has come up with a way to connect the data strategy to executive management that until now has been a missing piece: how to set the data strategies in motion. She doesn't just go into how to create a data strategy as if it were a cake to be baked. Instead, she makes the reader think about what kind of data strategies are needed and how they fit into your organization's challenges, intentions, and aspirations. The content is very credible as Marilu Lopez consistently uses and relates to academic research, literature, and thought leaders' experiences. Thus, her PAC framework is based on today's accumulated knowledge and takes you further from there. If you are about to take on data strategy work, you should start by first checking out Marilu Lopez's messages." Håkan Edvinsson, CTO, Principal Consultant, Informed Decisions
    "Today's college graduates think that technology is all about choosing a technology or a technology stack in order to get work done. They don't see that there is a larger infrastructure that they are a part of. What is needed is a book on the larger concepts that shape the IT industry. I recommend the book by Marilu Lopez as a starting place for understanding the larger framework under which IT operates." Bill Inmon, CEO of Forest Rim Technology

  • by Chris Date
    551,-

    Keys and foreign keys play a crucial role in relational databases: keys identify the objects of interest, and foreign keys knit those objects together. The basic idea couldn't be simpler. As so often, however, the devil is in the detail ... The fact is, these concepts aren't quite as straightforward as they might seem on first acquaintance; or, at least, such would appear to be the case, if the literature is anything to go by. In this one-of-a-kind book, noted database author C. J. Date traces the somewhat checkered history of the key and foreign key concepts, shedding some light on what turns out to be, on occasion, a surprisingly murky subject and explaining in detail what proper support should look like in true relational products. Topics covered include a detailed look at the pertinent theory; a critical review of the historical development of these ideas; and a couple of important case studies, one having to do with the SQL standard and one with the IBM DB2 product family. No serious database professional can afford to be without this book.
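    As a minimal, runnable sketch of the two concepts this book examines (not an excerpt from the book; the table and column names are invented for illustration), here is how a key and a foreign key behave using Python's built-in sqlite3 module:

      # Key vs. foreign key: a primary key identifies an object of interest; a
      # foreign key in another table references it, knitting the objects together.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when enabled

      conn.execute("""CREATE TABLE department (
                          dept_id   INTEGER PRIMARY KEY,       -- key: identifies the object
                          dept_name TEXT NOT NULL UNIQUE)""")
      conn.execute("""CREATE TABLE employee (
                          emp_id    INTEGER PRIMARY KEY,
                          emp_name  TEXT NOT NULL,
                          dept_id   INTEGER NOT NULL REFERENCES department(dept_id))""")

      conn.execute("INSERT INTO department VALUES (1, 'Research')")
      conn.execute("INSERT INTO employee VALUES (10, 'Ada', 1)")         # valid reference
      try:
          conn.execute("INSERT INTO employee VALUES (11, 'Grace', 99)")  # no department 99 exists
      except sqlite3.IntegrityError as err:
          print("Rejected by the foreign key:", err)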

  • by Steve Hoberman
    551,-

    The Align > Refine > Design series covers conceptual, logical, and physical data modeling (schema design and patterns) for leading technologies, combining proven data modeling practices with database-specific features to produce better applications. Read Elasticsearch Data Modeling and Schema Design if you are a data professional who needs to expand your modeling skills to include Elasticsearch or a technologist who knows Elasticsearch but needs to grow your schema design skills. The book's introduction and three chapters cover the Align, Refine, and Design approach. We include what the level does in the name by rebranding Conceptual, Logical, and Physical into Align, Refine, and Design. The introduction covers the three modeling characteristics of precise, minimal, and visual; the three model components of entities, relationships, and attributes (including keys); the three model levels of conceptual (align), logical (refine), and physical (design); and the three modeling perspectives of relational, dimensional, and query. Chapter 1, Align, is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Chapter 2, Refine, is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Chapter 3, Design, is about the technical requirements. That is, designing to accommodate our model's unique software and hardware needs. Align, Refine, and Design: that's the approach followed in this book and reinforced through an animal shelter case study. If you are interested in learning how to build multiple database solutions, read all the books in the Align > Refine > Design series. Since each book uses the same template, you can quickly skill up on additional database technologies.

  • by Merrill Albert
    453,-

    Read true crime stories of people abusing and misusing data. Just because there's data doesn't necessarily mean it's the right data or that people are using it properly. And as the saying goes, those who do not learn from history are doomed to repeat it. Documenting and understanding these stories can prevent them from happening again. Throughout a career in data, Merrill has seen data problems (data crimes) caused by people treating data improperly. Recognizing data crimes is the first step in getting a resolution. This book details incidents of data crimes and their impact. Although not the person who committed these data crimes, Merrill can analyze what happened and propose a solution. For people working in data, understanding data crimes and preventing them is essential to your organization. This book gives you specific stories you can share to explain the importance of getting the data right. For potential victims of data crimes, awareness will help you steer clear of them, or interact with the offender to come to a resolution. You might finally understand why companies keep making mistakes.

  • by Bruno Freitag
    588,-

    Design and implement a data lakehouse using technology-driven simplifications and generalizations. The approach you will learn enables consolidating even incoherent data from multiple source systems across complex enterprise environments. The precise business question does not need to be known in advance and can even change over time. The approach lends itself well to federated, cooperating data mesh nodes. The individual components, called mini-marts, are like the "data part" of a data quantum and are interoperable. We describe data model blueprints to generalize dimensions with synonyms and facts at different granularities. The book includes code examples using complex hierarchies as they exist in heterogeneous real-world go-to-market organizations.

  • by Bill Inmon
    551,-

    The data lakehouse is the next generation of the data warehouse and data lake, designed to meet the demands of today's complex and ever-changing modern information systems. This book shows you how to construct your data lakehouse as the foundation for your artificial intelligence (AI), machine learning (ML), and data mesh initiatives. Know the pitfalls and techniques for maximizing the business value of your data lakehouse. In addition, be able to explain the core characteristics and critical success factors of a data lakehouse. By reviewing entry errors and key incompatibility, and by ensuring good documentation, you can improve the data quality and believability of your lakehouse. Evaluate criteria for data quality, including accuracy, completeness, reliability, relevance, and timeliness. Understand the different types of storage for the lakehouse, including the under-utilized yet extremely valuable bulk storage. There are three data types in the data lakehouse (structured, textual, and analog/IoT), and for each, learn how to build a robust foundation for artificial intelligence (AI), machine learning (ML), and data mesh. Leverage data models for structured data, ontologies and taxonomies for textual data, and distillation algorithms for analog/IoT data. Learn how to abstract these data types to accommodate future requirements and simplify data lineage. Apply Extract, Transform, and Load (ETL) to create a structure that returns the answers to business problems. The end result is a data lakehouse that meets our needs. Speaking of human needs, learn Maslow's Hierarchy of Data Lakehouse Needs. Next, explore data integration geared for AI, ML, and data mesh. Then deep dive with us into all of the varieties of analytics within the lakehouse, including structured, textual, and analog analytics. Witness how descriptive data, data catalog, and metadata can increase the value of the lakehouse. We conclude with a detailed evolution of data architecture, from magnetic tape to the data lakehouse as a bedrock foundation for AI, ML, and data mesh.

  • by Barry Devlin
    514,-

    Let the sun shine through! The cloud in data warehousing skies is finally clearing as Dr. Barry Devlin builds the architectural and systems foundations for data lakehouse, data fabric, and data mesh, as well as the base cloud data warehouse. The past five years have seen an explosion of innovation and new technical forms as cloud data warehousing has gone mainstream. But confusion has grown too. After all, the business needs are largely unchanged. So, why are there so many options and approaches? How do they differ? Which one may be the best choice? And why? In this first volume of a two-part series, Dr. Barry Devlin, a founder of the entire data warehousing industry, offers initial answers to these questions. Drawing lessons from the long history of data warehousing, he defines an all-embracing architecture and draws specific architectural design patterns for each of these modern approaches. And he discusses the various choices and paths from current systems to the different cloud solutions. Volume II expands further on the architectural considerations and offers deeper dives into cloud data warehouse, data fabric, data lakehouse, and data mesh. It also offers an independent view of their strengths and weaknesses.

  • by Daniel Coupal
    551,-

    The Align > Refine > Design series covers conceptual, logical, and physical data modeling (schema design and patterns) for leading technologies, combining proven data modeling practices with database-specific features to produce better applications. Read MongoDB Data Modeling and Schema Design if you are a data professional who needs to expand your modeling skills to include MongoDB or a technologist who knows MongoDB but needs to grow your schema design skills. The book's introduction and three chapters cover the Align, Refine, and Design approach. We include what the level does in the name by rebranding Conceptual, Logical, and Physical into Align, Refine, and Design. The introduction covers the three modeling characteristics of precise, minimal, and visual; the three model components of entities, relationships, and attributes (including keys); the three model levels of conceptual (align), logical (refine), and physical (design); and the three modeling perspectives of relational, dimensional, and query. Chapter 1, Align, is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Chapter 2, Refine, is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Chapter 3, Design, is about the technical requirements. That is, designing to accommodate our model's unique software and hardware needs. Align, Refine, and Design: that's the approach followed in this book and reinforced through an animal shelter case study. If you are interested in learning how to build multiple database solutions, read all the books in the Align > Refine > Design series. Since each book uses the same template, you can quickly skill up on additional database technologies.
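    As a purely illustrative sketch of what a physical-level (Design) document model can look like in MongoDB (the animal-shelter entities and field names here are hypothetical and are not taken from the book's case study), related data is often embedded in a single document rather than split across joined tables:

      # Hypothetical document-model sketch: a pet with its related data embedded.
      # All names and values are invented for illustration only.
      pet = {
          "_id": "pet-0001",                         # document key
          "name": "Biscuit",
          "species": "Dog",
          "intake_date": "2024-03-01",
          "vaccinations": [                          # one-to-many embedded in the parent
              {"type": "rabies", "date": "2024-03-02"},
              {"type": "parvo", "date": "2024-03-09"},
          ],
          "adoption": {                              # one-to-one embedded sub-document
              "adopter_name": "J. Smith",
              "date": "2024-04-15",
          },
      }

      # With pymongo, such a document could be stored as-is, for example:
      #   from pymongo import MongoClient
      #   MongoClient().shelter.pets.insert_one(pet)
      print(pet["name"], "has", len(pet["vaccinations"]), "vaccination records")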

  • by Steve Hoberman
    551,-

    The Align > Refine > Design series covers conceptual, logical, and physical data modeling (schema design and patterns) for leading technologies, combining proven data modeling practices with database-specific features to produce better applications. Read Neo4j Data Modeling if you are a data professional who needs to expand your modeling skills to include Neo4j or a technologist who knows Neo4j but needs to grow your schema design skills. The book's introduction and three chapters cover the Align, Refine, and Design approach. We include what the level does in the name by rebranding Conceptual, Logical, and Physical into Align, Refine, and Design. The introduction covers the three modeling characteristics of precise, minimal, and visual; the three model components of entities, relationships, and attributes (including keys); the three model levels of conceptual (align), logical (refine), and physical (design); and the three modeling perspectives of relational, dimensional, and query. Chapter 1, Align, is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Chapter 2, Refine, is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Chapter 3, Design, is about the technical requirements. That is, designing to accommodate our model's unique software and hardware needs. Align, Refine, and Design: that's the approach followed in this book and reinforced through an animal shelter case study. If you are interested in learning how to build multiple database solutions, read all the books in the Align > Refine > Design series. Since each book uses the same template, you can quickly skill up on additional database technologies.

  • by Laura Madsen
    453,-

    Spanish edition. Data governance is broken. It is time to fix it. Why is data governance so inefficient? The truth is that data governance programs are not designed for the way we run our data teams; they are not designed for a modern organization at all. They were designed when reports were still sent by interoffice mail. The flow of data into and out of today's organizations is like a tsunami breaking through rigid data governance methods. Yet our programs still depend on that command-and-control approach. Have you ever tried to control a tsunami? Every organization that uses data knows it needs a data governance program. Data literacy efforts and legislation such as the GDPR have become reference points for our governance functions. But we still sit in data governance meetings without enough people and with too many questions to be able to move forward. There is no agility in the program because we assume a degree of fragility in the data that does not exist. We keep insisting on archaic methods that bring no value to our organizations. You cannot gain deep knowledge of your data without good governance practices. Laura Madsen shows you how to redefine governance for the modern era. In a casual, witty style, Madsen draws on decades of experience, shares interviews with other leading experts in the field, and grounds her perspective in research. See where it all fell apart, challenge entrenched beliefs, and embrace a fundamental shift: governance is not about stopping or preventing use, but about supporting the use of data. Bring trust and value back to your data governance functions and learn about the people-driven approach to governance, the processes that support the data tsunami, and the cutting-edge technology that enables data governance.

  • by Robert Seiner
    502,-

    Non-Invasive Data Governance Strikes Again provides a blend of 50 applicable lessons learned and perspectives gained from years of assisting organizations worldwide to follow the popular non-invasive approach from the bestseller, Non-Invasive Data Governance. Non-Invasive Data Governance (NIDG) does not mean that the governance of data and information will be easy or without trials and tribulations. Non-invasive does not mean that the program will be low impact. The title of this book is a dichotomy: achieving measurable results by being less threatening while the impact of striking with this approach can be huge. NIDG focuses on leveraging existing levels of accountability while addressing opportunities to improve. Read about a new framework for Non-Invasive Data Governance in this book that views the main components of a successful program through different levels and perspectives of the organization.
    "There is not a more prolific source of original ideas in the data governance community than Bob Seiner. He's in the field every day, working alongside his clients to help them address real business opportunities and problems. So, when Bob shares 'experience and perspective' in his new book, you are tapping into literally thousands of hours of hard work and creative thinking which have been applied in the real world." Tony Shaw, CEO and Founder, DATAVERSITY
    "In this sequel to Non-Invasive Data Governance, Bob Seiner uncovers, unwraps, exposes, defrags, and dissects the lessons and the questions he has discovered in the application of his groundbreaking methodologies for data governance. The reader will be well prepared with examples of how to manage, steward, measure, and become the best Data Governance professional with the least possible friction." Michelle Finneran Dennedy, Chief Executive Officer, PrivacyCode, Inc. and Partner, Privatus Strategic Consulting
    "Bob's invention of the Non-Invasive Data Governance methodology has helped to revolutionize the way organizations manage data. And this book shares the lessons he has learned putting NIDG into practice. So, if you are looking for guidance on governing your organization's data without requiring significant changes to existing processes or infrastructure, let Bob's latest book be your guide." Craig Mullins, President & Principal Consultant, Mullins Consulting, Inc.
    "Too often, data books present a rose-colored lens of how things should be. Bob Seiner certainly has the pedigree to know what needs to change, but he also has the practical experience to make it happen. This book is a key that can help anyone unlock the knowledge that Bob has accumulated the hard way, from the experience of many real-life data governance implementations. Everyone should add this to their data bookshelf, since no matter the scenario, there is likely to be a chapter or essay that will apply and help improve the situation. I can't imagine a better encore to Bob's original Non-Invasive Data Governance book!" Anthony Algmin, Founder, Algmin Data Leadership

  • by Mark Atkins
    673,-

    Gain confidence in regulatory compliance, drive effective infotech investment, and uplift organizational knowledge for effective data governance. Based on the authors' extensive experience and reinforced with case studies, this book presents their award-winning framework of business tools and techniques. It provides a 4-point strategy for achieving business-driven information governance aligned with business risk management to ensure the delivery of quality and reliable information. Master a structured approach to sharing knowledge and defining business terms by removing ambiguous terminology from business communications. Raise awareness of cross-organizational misalignment and improve communication through a clearly defined business language. Use the knowledge capture technique to establish and strengthen responsibility for business information, and empower dynamic communities focused on resolving business concerns. Improve communication with data scientists and engineers on business needs, ensuring a higher return on infotech investment and reliable information in reports and dashboards. Expand business capability for business-side information governance over key information artifacts and related data. The four-point strategy culminates in establishing a governed business encyclopedia of organizational knowledge, including a glossary of business terms and definitions, an open register for issues, and how these all relate with key artifacts, including dashboards, policies, operating procedures, and data sets.
    Foreword by John Stanhope AM, Chairman of Port of Melbourne, Chairman of Bionics Institute, and Chancellor of Deakin University.
    "All organisations, whether they are relatively new start-ups or century-old companies, face the seemingly daunting task of defining, organising and governing their business information. In this book, Terry and Mark share their significant experience of helping organisations solve real-world problems, repositioning business information from a problem to be solved to a competitive knowledge asset. The authors present a simple but effective approach and is a timely reminder that sometimes you need to slow down and spend time on the basics to ultimately go fast." Kate Koch, CFO, SEEK Australia
    "This book cuts through the complex issues of information management with real-world experience and case studies. I highly recommend it." Thana Velummylum, ex CDO of Telstra
    "The only Information Management book that has motivated me to keep reading. The authors have reflected on their long experience, developed sound practical techniques, and described them in a clear readable manner." Graham Witt, author of Data Modeling Essentials, Writing Effective Business Rules, Data Modeling for Quality, and Technical Writing for Quality

  • by Scott Taylor
    429,-

    French edition. The Data Whisperer's practical guide to explaining and understanding the strategic value of data management: the WHY, not the HOW. The need for data management is everywhere in your enterprise. The value of every digital transformation initiative, every data science and analytics project, every online service offering, every foray into e-commerce, and every enterprise software implementation is inextricably tied to the success of data management efforts. Although it comes down to a simple matter of "garbage in, garbage out," that slogan rarely drives lasting action. We need to tell a better data story. Data storytelling is probably the most pervasive non-technical trend in technology, but it does not directly support data management, because it focuses on how to tell stories with data. It is time, then, to broaden the field of data storytelling to recognize the role of data management by telling stories about data. Learn how to gain stakeholder buy-in and executive commitment to fund and support data management as a systematic, consistent, and foundational part of your business.

  • by Chris Date
    612,-

    A set in mathematics is just a collection of elements; an example is the set of natural numbers {1, 2, 3, ...}. Simplifying somewhat, the theory of sets can be regarded as the foundation on which the whole of mathematics is built; and the founder of set theory is the German logician and mathematician Georg Cantor (1845-1918). However, the aspect of Cantor's work that's most widely known (or most controversial, at any rate) isn't so much set theory in general, but rather those parts of that theory that have to do with infinite sets in particular. Cantor claimed among other things that the infinite set of real numbers contains strictly more elements than the infinite set of natural numbers. From this result, he concluded that there's more than one kind of infinity; in fact, he claimed that there are an infinite number of different infinities, or transfinite numbers. (He also believed these results had been communicated to him by God.) The aim of this book is to explain and investigate these claims of Cantor's in depth (and question them, where appropriate). It's not a textbook, though; instead, it's a popular account (it tells a story), and the target audience is interested lay readers, not mathematicians or logicians. What little mathematics is needed to understand the story is explained in the book itself.
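    For reference, the claims described above can be stated compactly in standard notation (these are the standard results themselves, not wording taken from the book): the natural numbers are strictly smaller than the real numbers, and, more generally, every set is strictly smaller than its power set, which yields an unending ladder of infinities:

      \[
        |\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}|,
        \qquad
        |X| \;<\; |2^{X}| \ \text{for every set } X,
      \]
      \[
        \aleph_0 \;<\; 2^{\aleph_0} \;<\; 2^{2^{\aleph_0}} \;<\; \cdots
      \]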

  • by Bill Inmon
    612,-

    The data lakehouse is the next generation of the data warehouse and data lake, designed to meet today's complex and ever-changing analytics, machine learning, and data science requirements. Learn about the features and architecture of the data lakehouse, along with its powerful analytical infrastructure. Appreciate how the universal common connector blends structured, textual, analog, and IoT data. Maintain the lakehouse for future generations through Data Lakehouse Housekeeping and Data Future-proofing. Incorporate data catalogs, data lineage tools, and open source software into your architecture to ensure your data scientists, analysts, and end users live happily ever after. Deep dive into one specific implementation of a data lakehouse: the Databricks Lakehouse Platform.

  • by Graham Witt
    673,-

    Sooner or later many technical professionals need to write, or contribute to, documents such as requirements specifications, user manuals, and standards documents. A major contribution to the success of any project is the quality of the documentation. This book will assist anyone who wants to communicate more effectively when writing in the English language, in the US, the UK, the EU, or the British Commonwealth. It discusses what makes written communication effective and, more to the point, what makes it ineffective. It contains practical advice for the technical writer, covering choice of words, arrangement into sentences, document organization, and layout. It contains numerous examples of both well-formed and poorly-formed statements.

  • by Robert Seiner
    636,-

  • by Kathy Rondon
    629,-

  • by Bill Inmon, Patty Haines & David Rapien
    491,-

  • by DAMA International
    973,-

  • by Zacharias Voulgaris
    629,-

    Become proficient in using heuristics within the data science pipeline to produce higher quality results in less time. Although data professionals have used heuristics for many years within optimization-related applications, heuristics have been a vibrant area of research in various data-related areas, from machine learning to image processing. Heuristics also play a role in niche applications such as cybersecurity. In addition, the advent of AI and other data-driven methodologies has brought heuristics to the forefront of data-related work. In this book, we explore heuristics from a practical perspective. We illustrate how heuristics can help you solve challenging problems through simple examples and real-life situations. Apply Jaccard Similarity and a variant, F1 score, Entropy, Ectropy, Area Under Curve, Particle Swarm Optimization, and Genetic Algorithms (along with GA variants). Beyond just exhibiting the various known and lesser-known heuristics available today, we also examine how you can go about creating your own through a simple and functional framework. Code notebooks enable you to practice all of the techniques and explore a few of your own. There is no doubt that the data-driven paradigm is here to stay. There are many ways to stand out in it as a data professional, with AI-related know-how being at the top of the list. However, equally impactful can be the creative tools (heuristics) that make such technologies feasible and scalable. Unfortunately, this is a way that not many people care to follow as it's off the beaten path. Are you up for the challenge?
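    As a minimal sketch of one of the heuristics named above (the sample data is invented; this is not code from the book's notebooks), Jaccard similarity measures the overlap between two sets as the size of their intersection divided by the size of their union:

      # Jaccard similarity: |A intersect B| / |A union B|, a simple overlap heuristic.
      def jaccard_similarity(a: set, b: set) -> float:
          """Return the Jaccard similarity of two sets (0.0 if both are empty)."""
          if not a and not b:
              return 0.0
          return len(a & b) / len(a | b)

      # Invented example: tag sets describing two datasets in a catalog.
      tags_a = {"governance", "quality", "lineage", "catalog"}
      tags_b = {"quality", "catalog", "privacy"}
      print(f"Jaccard similarity: {jaccard_similarity(tags_a, tags_b):.2f}")  # 2 / 5 = 0.40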

  • by Robert S Seiner
    636,-
