Sensemaking technologies for collaboration at scale

Keywords: computational analogy, NLP, sensemaking, crowdsourcing

In many collective creativity systems, contributors can number in the thousands to tens of thousands, and enabling true collaboration at that scale is a difficult challenge. Without effective mechanisms for surfacing key insights and sharing them among collaborators, many collective creativity efforts devolve into largely independent work, yielding a preponderance of redundant, shallow, and low-quality ideas. We want to know: how might we build sensemaking technologies that support effective collaboration at scale?

This research is conducted primarily with awesome collaborators, since I lack a background in machine learning. What I bring to the table is a deep understanding of which kinds of sensemaking are useful for creativity, of the representational requirements for supporting those kinds of sensemaking (e.g., relational knowledge for analogy), and of crowdsourcing techniques that can support or combine with novel machine learning systems.

What we’ve learned so far

  • Even simple, off-the-shelf machine sensemaking (e.g., Latent Semantic Analysis and k-means clustering) can improve people’s interaction with prior knowledge in a way that improves creativity (Chan, Dang, & Dow, 2016).
  • The most effective computational sensemaking tools are powered by human semantic judgments that are tedious for humans to provide, making these tools expensive to build and improve, especially when domain knowledge is necessary. We’ve discovered a design pattern called “integrated crowdsourcing” that makes large-scale collection of semantic judgments tractable by seamlessly folding the judgments into primary tasks that people are already intrinsically motivated to perform (Siangliulue, Chan, Dow, & Gajos, 2016).
  • Analogy is really useful, but really hard to do computationally (Fu et al., 2013; Fu, Chan, Schunn, Cagan, & Kotovsky, 2013). Fortunately, we’ve found that combining crowdsourcing, machine learning, and a relaxed requirement for fully-specified relational knowledge can get us surprisingly close to human-like analogical reasoning over real-world documents (Kittur et al., 2019), ranging from relatively simple consumer product descriptions (Chan, Hope, Shahaf, & Kittur, 2016; Hope, Chan, Kittur, & Shahaf, 2017) to complex research paper abstracts (Chan, Chang, Hope, Shahaf, & Kittur, 2018). We’ve also developed new techniques that help users explore different ways to express their queries (with abstractions) so that our algorithms can do a better job of finding analogies that are both useful and from different domains (Gilon et al., 2018).
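To make the first finding concrete: the basic machine-sensemaking pipeline is to turn each idea into a term vector and cluster the vectors. The sketch below is not the actual pipeline from Chan, Dang, & Dow (2016) — a production system would apply LSA (SVD over a TF-IDF matrix, typically via a library like scikit-learn) before clustering, and would use k-means++ initialization — but it shows the core k-means step on a tiny, invented idea corpus:

```python
import math

# Toy idea corpus (hypothetical examples, not data from the cited study).
IDEAS = [
    "solar panel energy",
    "compost food waste",
    "wind energy turbine",
    "solar energy grid",
    "recycle food waste",
]

def tokenize(text):
    return text.lower().split()

# Shared vocabulary mapping each term to a vector dimension.
vocab = {}
for idea in IDEAS:
    for tok in tokenize(idea):
        vocab.setdefault(tok, len(vocab))

def tf_vector(text):
    """Raw term-count vector (LSA would reduce this to a dense low-rank space)."""
    v = [0.0] * len(vocab)
    for tok in tokenize(text):
        v[vocab[tok]] += 1.0
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def kmeans(vectors, k, iters=10):
    # Deterministic init for reproducibility: seed centroids with the
    # first k vectors. (Real systems typically use k-means++ init.)
    centroids = [list(v) for v in vectors[:k]]
    assignments = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: each vector joins its most-similar centroid.
        for i, v in enumerate(vectors):
            assignments[i] = max(range(k), key=lambda c: cosine(v, centroids[c]))
        # Update step: each centroid becomes the mean of its members.
        for c in range(k):
            members = [v for v, a in zip(vectors, assignments) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assignments

clusters = kmeans([tf_vector(idea) for idea in IDEAS], k=2)
# Energy-related ideas land in one cluster, food-waste ideas in the other.
```

Even this crude grouping illustrates the payoff: instead of scrolling a flat list of thousands of ideas, a contributor can browse a handful of labeled neighborhoods of prior work.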
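The “analogy-lite” idea in the third finding can also be sketched in code. In SOLVENT-style pipelines, documents are annotated with separate purpose (“what it is for”) and mechanism (“how it works”) fields, and a good analogy is a document with a similar purpose but a different mechanism. Everything below is invented for illustration — the products, the field text, and the use of bag-of-words counts in place of the learned vector representations the actual systems use:

```python
from collections import Counter
import math

# Hypothetical product descriptions, hand-split into purpose and mechanism
# fields. In SOLVENT-style pipelines these annotations come from crowd
# workers or from the document authors themselves.
PRODUCTS = {
    "gel shoe insole": {
        "purpose": "absorb shock while walking",
        "mechanism": "soft gel padding layer",
    },
    "bike suspension fork": {
        "purpose": "absorb shock while riding",
        "mechanism": "spring and damper in the fork",
    },
    "gel mattress topper": {
        "purpose": "make a bed surface more comfortable",
        "mechanism": "soft gel padding layer",
    },
}

def bow(text):
    """Bag-of-words counts (a crude stand-in for learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_analogies(products, query):
    """Score candidates by purpose similarity MINUS mechanism similarity:
    a good analogy serves the same purpose via a different mechanism."""
    q = products[query]
    scored = []
    for name, doc in products.items():
        if name == query:
            continue
        score = (cosine(bow(q["purpose"]), bow(doc["purpose"]))
                 - cosine(bow(q["mechanism"]), bow(doc["mechanism"])))
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)]

ranked = rank_analogies(PRODUCTS, "gel shoe insole")
# The suspension fork (same purpose, different mechanism) outranks the
# mattress topper (same mechanism, different purpose).
```

The key design choice — subtracting mechanism similarity rather than matching whole documents — is what pushes retrieval toward cross-domain inspiration instead of near-duplicates, which is exactly where plain text similarity fails.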

What’s next

Here are some things we’re currently working on and/or pondering:

  • Can we create a scientific communication ecosystem (or modify the existing ones) where it becomes natural and effortless to communicate knowledge in more machine-readable ways?
  • How far can we get with “analogy-lite” representations of documents?
  • How can we scalably (e.g., [quasi]automatically) create human-readable/interpretable representations of large document collections?
  • Why is literature reviewing so painful? And how might computing systems help?

Related publications

  1. Kittur, A., Yu, L., Hope, T., Chan, J., Lifshitz-Assaf, H., Gilon, K., Ng, F., Kraut, R. E., & Shahaf, D. (2019). Scaling up Analogical Innovation with Crowds and AI. Proceedings of the National Academy of Sciences.
  2. Chan, J., Chang, J. C., Hope, T., Shahaf, D., & Kittur, A. (2018). SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers. Proceedings of the ACM on Human-Computer Interaction (CSCW).
  3. Gilon, K., Chan, J., Ng, F. Y., Lifshitz-Assaf, H., Kittur, A., & Shahaf, D. (2018). Analogy Mining for Specific Design Needs. In Proceedings of the 2018 ACM SIGCHI Conference on Human Factors in Computing Systems.
  4. Hope, T., Chan, J., Kittur, A., & Shahaf, D. (2017). Accelerating Innovation Through Analogy Mining. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Best Paper.
  5. Siangliulue, P., Chan, J., Dow, S. P., & Gajos, K. Z. (2016). IdeaHound: Improving Large-scale Collaborative Ideation with Crowd-Powered Real-time Semantic Modeling. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology.
  6. Chan, J., Dang, S., & Dow, S. P. (2016). Comparing Different Sensemaking Approaches for Large-Scale Ideation. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.
  7. Chan, J., Hope, T., Shahaf, D., & Kittur, A. (2016). Scaling up Analogy with Crowdsourcing and Machine Learning. In ICCBR Workshops.
  8. Fu, K., Chan, J., Schunn, C., Cagan, J., & Kotovsky, K. (2013). Expert representation of design repository space: A comparison to and validation of algorithmic output. Design Studies.
  9. Fu, K., Chan, J., Cagan, J., Kotovsky, K., Schunn, C., & Wood, K. (2013). The Meaning of Near and Far: The Impact of Structuring Design Databases and the Effect of Distance of Analogy on Design Output. Journal of Mechanical Design.
