This Springer Brief presents a basic algorithm that provides a correct solution to finding an optimal state change attempt, as well as an enhanced algorithm built on top of the well-known trie data structure. It explores correctness and algorithmic complexity results for both algorithms, together with experiments comparing their performance on both real-world and synthetic data. Topics addressed include optimal state change attempts, state change effectiveness, different kinds of effect estimators, planning under uncertainty, and experimental evaluation. These topics will help researchers analyze tabular data, even if the data contains states (of the world) and events (taken by an agent) whose effects are not well understood. Event DBs are omnipresent in the social sciences and may cover diverse scenarios, from political events and the state of a country to education-related actions and their effects on a school system. With a wide range of applications in computer science and the social sciences, the information in this Springer Brief is valuable for professionals and researchers dealing with tabular data, artificial intelligence, and data mining. The applications are also useful for advanced-level students of computer science.
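The trie (prefix tree) on which the enhanced algorithm is built can be sketched minimally as follows. This is a generic Python illustration, not the book's algorithm; the class names, methods, and the (attribute, value) keys in the usage example are all invented for the sketch.

```python
class TrieNode:
    def __init__(self):
        self.children = {}      # maps a key element to a child node
        self.terminal = False   # marks the end of a stored sequence

class Trie:
    """Minimal prefix tree over sequences of hashable items."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, seq):
        node = self.root
        for item in seq:
            node = node.children.setdefault(item, TrieNode())
        node.terminal = True

    def contains(self, seq):
        node = self.root
        for item in seq:
            node = node.children.get(item)
            if node is None:
                return False
        return node.terminal

# Hypothetical usage: index tuples of (attribute, value) pairs,
# as one might for rows of an event DB.
t = Trie()
t.insert([("state", "A"), ("event", "raise_tariff")])
print(t.contains([("state", "A"), ("event", "raise_tariff")]))  # True
print(t.contains([("state", "A")]))  # False: a prefix, not a stored row
```

Sharing common prefixes is what makes a trie attractive here: rows that agree on their first attributes reuse the same nodes, so lookups cost time proportional to the sequence length rather than the number of stored rows.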
Martin Schymanietz explores dynamic capabilities that help organizations cope with the challenges and opportunities of utilizing data for service provision. Data-driven service innovation provides a fruitful pathway for organizations to extend their current offerings, deepen customer relationships, and increase revenues. He examines the nature of data-driven service innovation and its accompanying challenges, and identifies relevant actors and their roles at the individual level. This approach helps organizations develop dynamic capabilities based on individual actors that in sum shape the whole organization. Contents: From the resource-based view to dynamic capabilities; Dynamic capabilities for service innovation; Introducing data-driven service innovation; Identifying actors and challenges; Exploring actor roles and capabilities; Towards a dynamic capability framework for data-driven service innovation. Target Groups: Lecturers and students of business administration, business informatics, industrial engineering, management, and innovation management; experts in management, innovation management, and R&D. The Author: Dr. Martin Schymanietz is a postdoctoral researcher at Friedrich-Alexander University Erlangen-Nürnberg, focusing on data-driven service innovation and its characteristics. He received his PhD in economic sciences under Prof. Dr. Kathrin M. Möslein at the Department of Information Systems, Chair of Information Systems - Innovation and Value Creation.
Data-driven methods have long been used in Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) synthesis, and have more recently been introduced for dialogue management, spoken language understanding, and Natural Language Generation. Machine learning is now present “end-to-end” in Spoken Dialogue Systems (SDS). However, these techniques require data collection and annotation campaigns, which can be time-consuming and expensive, as well as dataset expansion by simulation. In this book, we provide an overview of the current state of the field and of recent advances, with a specific focus on adaptivity.
Data deficiencies contribute to state fragility and exacerbate fragile states’ already limited capacity to provide basic services, public security, and the rule of law. The lack of robust, good-quality data can also have a disabling effect on government efforts to manage political conflict, and indeed can worsen conflict, since violent settings pose substantial challenges to knowledge generation, capture, and application. In short, in fragile contexts the need for reliable evidence at all levels is perhaps greater than anywhere else. The development of sustainable and professional ‘data-literate’ stakeholders who are able to produce official statistics and to increase their quality and accessibility can contribute to improved development outcomes. Good-quality and reliable statistics are also required to track the progress of development policies through the monitoring of performance indicators and targets, and to ensure that public resources are achieving results. While data alone cannot have a transformative effect without the right contextual incentives, it is an essential and necessary prerequisite for greater accountability and more efficient decision-making. This volume explores methods and insights for data collection and use in fragile contexts, with a focus on Sudan. It begins by posing several questions on the political economy of data, and then sets out a framework for assessing the validity, reliability, and potential impact of data on decision-making in a fragile country. It also sets out insights on challenges associated with fragile states, derived from recent data collected in Sudan: the 2014/2015 DFID Sudan household survey. This includes data-driven analysis of topics including female genital mutilation, public service delivery, and the interplay of governance, service quality, and state legitimacy.
Vols. for 1973- include the following subject areas: Biological sciences, Agriculture, Chemistry, Environmental sciences, Health sciences, Engineering, Mathematics and statistics, Earth sciences, Physics, Education, Psychology, Sociology, Anthropology, History, Law & political science, Business & economics, Geography & regional planning, Language & literature, Fine arts, Library & information science, Mass communications, Music, Philosophy and Religion.
This volume presents a perspective on programs and policies designed to enhance the development of local and regional economies. The primary purpose is to provide an overview to assist economic development organizations in implementing competitive positioning programs and utilizing information resources management (IRM) techniques. The first three chapters build the case for an IRM perspective on economic development. Chapter Four rounds out the discussion by proposing broad-based academic and governmental endeavors based on a perspective on economic development that is consonant with a rapidly changing, highly interdependent, global economy in today's Information Age.
This study addresses the question: "To what extent are teacher-assigned, subject-specific grades useful for data-driven decision making in schools?" Recently, schools have been urged to bring teachers and school leaders together around student-level data in an effort to increase dialogue, collaboration, and professional communities to improve educational practice through data-driven decision making. However, schools are inundated with data. While much attention has been paid to the use and reporting of standardized test scores in policy and in school- and district-level data-driven decision making, much of the industry of schools is devoted to the generation and reporting of grades. Historically, little attention has been paid to student grades and grade patterns and their use in predicting student performance, standardized assessment scores, and on-time graduation. This study analyzed the entire K-12 subject-specific grading and assessment histories of two cohorts in two separate school districts through correlations and a novel application of cluster analysis. Results suggest that longitudinal K-12 grading histories are useful. Grades and standardized assessments appear to be converging over time for one of the two school districts studied, suggesting that for one of the districts but not the other, current accountability policies and state curriculum frameworks may be pushing into classrooms and modifying teachers' daily practice, as measured through an increasing correlation of grades and standardized assessments. Moreover, using cluster analysis, K-12 subject-specific grading patterns appear to show that early elementary school grade patterns predict future student grade patterns as well as qualitative student outcomes, such as on-time graduation. The findings of this study also suggest that K-12 subject-specific grade patterning using cluster analysis is an advance over past methods of predicting students at risk of dropping out of school.
Additionally, the evidence supports a finding that grades may be an assessment of both academic knowledge and a student's ability to negotiate the social processes of school. A bibliography is included. (Contains 20 tables and 28 figures.)
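Clustering of longitudinal grading histories, as described above, can be illustrated with a minimal k-means sketch on synthetic grade trajectories. Everything here is invented for illustration, including the two synthetic groups, the GPA scale, and the `kmeans` helper; it does not reproduce the study's data, method, or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic K-12 trajectories: 13 grade levels on a 0-4 GPA scale,
# with two loose groups: steadily high performers and declining performers.
high = rng.normal(3.5, 0.2, size=(20, 13))
declining = np.linspace(3.0, 1.5, 13) + rng.normal(0, 0.2, size=(20, 13))
X = np.vstack([high, declining])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each trajectory to its nearest
    center, then recompute centers, repeated for a fixed number of steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = []
        for j in range(k):
            pts = X[labels == j]
            # Keep the old center if a cluster happens to empty out
            new.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new)
    return labels, centers

labels, centers = kmeans(X, k=2)
# With well-separated synthetic groups, the two trajectory shapes
# should largely fall into different clusters.
print(labels)
```

Each cluster center is itself a 13-point trajectory, which is what makes this kind of analysis interpretable: a center can be read as a prototypical grade pattern, such as "declining from middle school onward".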
Examines policies with a focus on the substance of environmental statutes, on how they are translated into regulations, and on the factors that affect how they influence real-world behavior. This book offers teaching and study aids, real-world problems and questions, pathfinders explaining where to locate crucial source materials, and more.
The commodification of computing, sensors, actuators, data storage, and algorithms has unleashed a new wave of automation throughout society. Motivated by the promise of new capabilities, quality improvements, or efficiency gains, data-driven technologies have captured the attention and imagination of the public and many domain experts. Though opportunities are ample, the rapid introduction of data-driven functionality also triggers well-founded concerns about safeguarding critical values such as safety, privacy, and justice. In the context of operating electric distribution networks, the need for data-driven monitoring and control is explained by the irreversible transition from fossil to renewable generation and the accompanying electrification of our economy in areas like transportation and heating. The traditional fit-and-forget paradigm of designing networks conservatively for the projected peak loads assumed unidirectional power flow, predictable future demand, and monotonic voltage drops, and allowed for operating at near-100% reliability with minimal requirements for sensing and actuation. The intermittent nature of Distributed Generation (DG), its ability to feed power back to the grid and cause bidirectional power flow, and the diversifying and nonlinear behavior of electric loads are all eating away at the robustness of this approach, causing Distribution System Operators (DSOs) to put caps on allowable DG and revisit their design and operating practices. Rather than making traditionally expensive network reinforcements in often aging physical infrastructures, DSOs are trying to increase the observability and controllability of their networks by leveraging new sensing and actuation technologies and exploring the ability of data-driven algorithms to help integrate more DG in a more distributed (in space and time) and cost-effective way.
This dissertation works toward this vision by formulating a systematic control-theoretic approach for integrating data-driven monitoring and control in the operation of electric distribution networks. First, a Bayesian approach to state estimation overcomes the constraint of limited available real-time sensors by integrating voltage forecasting. A second class of tools discussed is the use of machine learning to decentralize Optimal Power Flow (OPF) methods by utilizing inverter-interfaced Distributed Energy Resources (DERs). The decentralized OPF method lets each DER learn a policy that contributes to network objectives from its local historical data and measurements alone. This approach is formulated as a compression and reconstruction problem through an information-theoretic lens, providing fundamental limits of reconstruction and a strategy for optimal communication to improve learning-based reconstruction of optimal policies throughout a network. Lastly, the ambition to control networks in a distributed fashion triggers concerns about privacy-sensitive information that may be inferred from an agent's shared data. For a general class of algorithms, a new notion of local differential privacy is integrated that allows each agent to customize the protection of local information captured in constraints and objective functions. The ultimate goal of the work presented in this dissertation is to contribute to a framework for the integral and value-sensitive design and implementation of data-driven methodologies in critical infrastructure. To address the inherent cross-disciplinary nature of this larger goal, the final chapter explains how each automated decision-making tool reflects and affects values important to its stakeholders. The chapter argues that, in order to enable the beneficial integration of such tools, practitioners need to reflect on their epistemology and situate the design of automated decision-making in its inherently dynamic and human context.
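The idea of local differential privacy with per-agent customization can be illustrated with the standard Laplace mechanism, in which each agent adds calibrated noise to its own data before sharing it. This is a generic sketch, not the dissertation's formulation; the function name, the load values, and the choice of epsilons are all invented for the example.

```python
import numpy as np

def laplace_local_dp(value, sensitivity, epsilon, rng):
    """Release `value` with epsilon-local differential privacy via the
    Laplace mechanism: noise drawn with scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)

# Hypothetical scenario: each agent perturbs its own load measurement (kW)
# before sharing it with the network. A smaller epsilon means stronger
# protection but a noisier report; each agent may pick its own epsilon.
true_load = 4.2
report_strict = laplace_local_dp(true_load, sensitivity=1.0, epsilon=0.1, rng=rng)
report_loose = laplace_local_dp(true_load, sensitivity=1.0, epsilon=10.0, rng=rng)
print(report_strict, report_loose)
```

Because the Laplace noise is zero-mean, aggregates over many agents remain close to the true average even when each individual report is heavily perturbed, which is what makes locally privatized data usable for network-level monitoring.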