trustbusting & federated learning
https://techxplore.com/news/2019-06-infusing-machine-inductive-biases-capture.html
Over the past couple of years, researchers have developed theoretical models aimed at explaining decision-making, as well as machine learning (ML) models that try to predict human behavior. Despite the achievements associated with some of these models, accurately predicting human decisions remains a significant research challenge. Now, researchers at the University of California, Berkeley and Princeton University have come up with a mechanism to translate insights from psychological theories into inductive biases within a machine learning model. The approach, apart from getting the community closer to modeling and predicting human behavior, should also encourage greater collaboration between the machine learning and behavioral science communities.
https://devclass.com/2019/06/04/mckinsey-flips-lid-open-sources-kedro-machine-learning-framework/
Management consultancy McKinsey has made its first foray into the open source world, offering up a machine learning development framework developed at its QuantumBlack analytics unit. The company stated that the key area the offering tackles is the low “production readiness” of machine learning projects across organizations, as the code bases “often don’t have the application of good software engineering principles.” McKinsey’s latest offering is data agnostic, provides rich pipeline visualisation capabilities, and comes loaded with versioning, reproducibility, and the ability to log what’s happening in your pipeline.
Instead of gathering data in the cloud from users to train data sets, federated learning trains AI models on mobile devices in large batches, then transfers those learnings back to a global model without the need for data to leave the device. In March, Google released TensorFlow Federated to make federated learning easier to perform with its popular machine learning framework. As part of the latest release of Facebook’s popular deep learning framework PyTorch last month, the company’s AI Research group rolled out Secure and Private AI, a free two-month Udacity course on the use of methods like encrypted computation, differential privacy, and federated learning. The first course began last week and is being taught by Andrew Trask, a senior research scientist at Google’s DeepMind. He’s also the leader of OpenMined, a privacy-focused open source AI community that in March released PySyft to bring PyTorch and federated learning together. Updates sent from devices can still contain some personal data or tell you about a person, so differential privacy is used to add Gaussian noise to data shared by devices, Google AI researcher Brendan McMahan said in a 2018 presentation. Distributing model training and predictions to devices instead of sharing data in the cloud also saves battery and bandwidth, since you would have to download the model on Wi-Fi, he said.
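The aggregation loop described above can be sketched in a few lines. Everything below is an illustrative assumption: a toy linear model, synthetic per-device data, and a simplified Gaussian-noise step standing in for a real differential-privacy mechanism; this is not TensorFlow Federated's or PySyft's actual API.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    # One simulated on-device training step: a single gradient step
    # on a toy linear least-squares model (illustrative only).
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights, devices, sigma=0.01, seed=0):
    # Each device computes its update locally; only the (noised) update
    # leaves the device, never the raw data. Adding Gaussian noise to
    # each update is the simplified privacy step mentioned above.
    rng = np.random.default_rng(seed)
    deltas = []
    for data in devices:
        w = local_update(global_weights, data)
        deltas.append(w - global_weights + rng.normal(0.0, sigma, size=w.shape))
    # The server only ever sees the averaged, noised updates.
    return global_weights + np.mean(deltas, axis=0)
```

In a real deployment the noise scale is calibrated to formal privacy budgets and updates are clipped before aggregation; the sketch keeps only the structural idea of train-locally-then-average.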
https://towardsdatascience.com/pre-trained-word-embeddings-or-embedding-layer-a-dilemma-8406959fd76c
If you search for “comparison of different types of pre-trained word embeddings”, Google returns plenty of results, ranging from arXiv dumps to experimental evidence. However, there is a severe lack of research comparing the performance of pre-trained word embeddings to that of an embedding layer. Meghdad Farahmand has put together some fantastic takeaways from his comparative observations.
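The dilemma can be made concrete with a toy lookup. The vocabulary, dimensions, and vectors below are random stand-ins (assumptions, not real GloVe or word2vec rows); the point is that both options share the same lookup mechanics and differ only in how the table is initialized and whether it is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2}
dim = 4

# Option 1: pre-trained vectors, typically loaded once from a file of
# GloVe/word2vec rows and often kept frozen during training.
pretrained = rng.normal(size=(len(vocab), dim))

# Option 2: an embedding layer, a matrix initialized near zero and
# updated by backprop along with the rest of the model.
embedding_layer = rng.normal(scale=0.01, size=(len(vocab), dim))

def embed(tokens, table):
    # Identical lookup for both options; the difference is only
    # whether `table` receives gradient updates during training.
    return np.stack([table[vocab[t]] for t in tokens])
```

Frameworks expose the same choice directly, e.g. loading a pre-trained matrix into a frozen embedding layer versus letting a randomly initialized one train end to end.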
https://techxplore.com/news/2019-06-implications-social-robots-religious-contexts.html
Researchers at Siegen University and Würzburg University, in Germany, have recently carried out a study investigating the user experience and acceptability associated with the use of social robots in religious contexts. Their paper, published in Springer's International Journal of Social Robotics, offers interesting insight into how people perceive blessing robots compared to robots built for more conventional purposes. In the future, their work could inform the development of social robots for both religious and non-religious contexts, enhancing researchers' understanding of how end-users might perceive their creations.
https://towardsdatascience.com/alexa-alex-or-al-7a7e28fb4736
Tech journalist Joanna Stern explains that humans tend to construct gender for AI because humans are “social beings who relate better to things that resemble what they know, including, yes, girls and boys.” But why female? Research, such as this study by Karl MacDorman, shows that female voices are perceived to be “warmer”. Former Stanford professor Clifford Nass cites studies in his book, Wired for Speech, suggesting that most people perceive female voices as cooperative and helpful. Since the female voice tends to be perceived as more pleasant and sympathetic, people are more likely to buy such devices for the command-query tasks they perform. Therefore, the seemingly logical decision for widespread commercial adoption is to build female AI assistants. Also, our tech world is fraught with troubling trends when it comes to gender inequality. A recent UN report, “I’d blush if I could,” warns that embodied AIs like the primarily female voice assistants can actually reinforce harmful gender stereotypes. With the wide adoption of speaker devices in communities that do not subscribe to Western gender stereotypes, the feminization of voice assistants might help further entrench and spread gender biases. In this article, Nahua Kang suggests ways to fight gender bias in virtual assistants.
Google has built longstanding features into Drive to help surface relevant files for you, including one of the latest features: Priority in Drive. Priority is located in the upper left of Drive’s homepage. It surfaces files and suggests actions you might want to take, plus lets you create dedicated workspaces to help you stay focused. The results thus far have been promising. On average, Priority helps people find files twice as fast. When research shows that people spend nearly 20% of their time looking for internal information, that kind of time savings can make a big difference for your business. The article sheds light on how Google Drive is using Artificial Intelligence to help you focus on what matters.
Social Science One and Facebook this week hosted training for more than 50 independent researchers, announced last month by the Social Science Research Council, to study the role of social media in elections and democracy. This first-of-its-kind partnership between the academic research community and Facebook holds the potential to unlock important findings with large societal impact. At the same time, this work must be performed in a manner that protects people’s privacy. To achieve these dual goals, Facebook worked with the academic, privacy, and security communities to build a system that allows researchers to access data through a querying system that provides insights without revealing individual people’s identities. A key innovation in the development of the research tool has been to build in systems, such as differential privacy, that help provide more formal guarantees of privacy. Differential privacy is a method of adding “noise” to data sets to protect against reidentification attacks, which attempt to break conventional anonymization techniques.
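As a rough sketch of the idea behind such a querying system (this is a textbook mechanism, not Facebook's actual implementation), a counting query can be answered with Laplace noise calibrated to the query's sensitivity, so no single person's presence in the data set measurably changes the answer.

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0, seed=0):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1. Laplace noise with scale
    # 1/epsilon then yields epsilon-differential privacy.
    # Illustrative sketch only; record format and epsilon are assumptions.
    rng = np.random.default_rng(seed)
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(0.0, 1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; researchers see useful aggregate trends while individual rows stay protected.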
Airbnb has been migrating its infrastructure to a Service Oriented Architecture (“SOA”). SOA offers many upsides, such as enabling developer specialization and the ability to iterate faster. However, it also poses challenges for billing and payments applications because it makes it more difficult to maintain data integrity, leading to potential issues such as double payments on a transaction. An API call to a service that makes further API calls to downstream services, where each service changes state and potentially has side effects, is equivalent to executing a complex distributed transaction. Conventional approaches tackle parts of the problem, but each has gaps. This article highlights how Airbnb built a generic idempotency framework to achieve eventual consistency and correctness across its payments microservice architecture.
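The core idea of idempotency, a client-supplied key that makes retries safe, can be sketched minimally. This is a toy in-memory version under assumed names (`IdempotentPayments`, `charge`); Airbnb's actual framework is far more involved, handling lease acquisition, database transactions, and retry classification across services.

```python
import uuid

class IdempotentPayments:
    # Minimal sketch of an idempotency layer: each request carries a
    # client-generated key, and replays of that key return the recorded
    # result instead of charging the customer twice.
    def __init__(self):
        self._results = {}  # idempotency_key -> recorded response

    def charge(self, idempotency_key, amount):
        if idempotency_key in self._results:
            # Replayed request (e.g. a client retry after a timeout):
            # return the original outcome, with no new side effects.
            return self._results[idempotency_key]
        response = {"charged": amount, "txn": str(uuid.uuid4())}
        self._results[idempotency_key] = response  # record before replying
        return response
```

Because retries with the same key are no-ops, clients can safely retry failed or timed-out calls until they get an answer, which is what makes eventual consistency workable for payments.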
US regulators are seriously questioning whether companies like Amazon, Apple, Google, and Facebook have too much power. This new push to curb the might of Big Tech has a catchy solution: break up the companies. But a breakup will be hard to force, and the history of trustbusting suggests that many other solutions are possible. The article highlights several potential remedies short of breaking up the companies.