B|F|I System

BEAT-FLOW-INTERPLAY

Responsive Beat-Flow Production

Responsive Beat-Flow Production is an ongoing project funded by the Center for Africana Futures (CAF) and hosted by the Entertainment and Recording Industry Management (ERIM) program at the Texas Southern University (TSU) School of Communication (SOC). The project explores novel interactive production methods for hip hop music in which real-time beat making is responsive to word selection, and flow improvisation is informed by the beat maker’s sound choices.

BEAT-FLOW-INTERPLAY (BFI) system

The BEAT-FLOW-INTERPLAY (BFI) system is an interactive, machine-learning-based, network-supported performance system for hip hop artists, beat makers, and musicians. It was developed in Max 8 using the Fluid Corpus Manipulation (FluCoMa) toolkit available for that environment.
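
Because the BFI patch itself is a Max 8 program and cannot be reproduced here as text, the sketch below approximates its central idea in Python: a corpus of pre-analyzed drum samples is indexed by audio descriptors, and an incoming word, reduced to a feature vector, retrieves the nearest-matching sample, much as FluCoMa-style corpus querying works inside Max. All sample names, descriptor choices, and values here are hypothetical.

```python
# Minimal sketch (not the BFI implementation): responsive beat making as
# nearest-neighbor lookup in a descriptor space. Assumes a corpus of drum
# samples whose features were computed offline.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical descriptor table: each drum sample described by
# (spectral centroid in Hz, loudness in dB, duration in s).
corpus = {
    "kick_01.wav":  (120.0,  -6.0, 0.40),
    "snare_03.wav": (1800.0, -9.0, 0.25),
    "hat_07.wav":   (6500.0, -15.0, 0.10),
}
names = list(corpus)
tree = cKDTree(np.array([corpus[n] for n in names]))

def respond_to_word(word_features):
    """Return the corpus sample closest to the word's feature vector."""
    _, idx = tree.query(word_features)
    return names[idx]

# A bright, short word-feature vector pulls up the hi-hat.
print(respond_to_word((6000.0, -14.0, 0.12)))  # -> hat_07.wav
```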

BFI Software

Software Package

Click HERE to download

Instructions

Click HERE to download



Demo Videos

Coming Soon

Click HERE to watch




Related Research
Neuman, I. “TARTYP Generative Grammars and the Analysis of Hip Hop Beats,”
Proceedings of the 2022 Sound and Music Computing Conference (Saint-Étienne, France)

ABSTRACT
Pierre Schaeffer’s musique concrète is considered to be the predecessor of digital sampling. The latter is a defining element in the development of hip hop. Drawing on historic and stylistic connections between these genres, we suggest the use of the Schaefferian core typology, the TARTYP, in the analysis of hip hop beats. We present a TARTYP classification of the beat’s basic components, based on samples taken from the pre-mastered multitrack sessions of professionally produced hip hop songs, mapping these components to TARTYP sound object types. We use this classification to define sets of rewrite rules in the TARTYP generative grammars that we have presented in previous studies. We demonstrate how a path extracted from a rewrite rule set within the TARTYP Balanced grammar regenerates a representative hip hop drum pattern.


PDF
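
As a toy illustration of the approach described in the abstract above, the following sketch derives a drum pattern from generative-grammar rewrite rules. The rules, symbols, and drum mapping are invented for demonstration; they are not the rule sets or the TARTYP classification from the paper.

```python
# Toy example: a path through rewrite rules regenerates a drum pattern.
# Terminal symbols stand in for TARTYP-style sound-object classes that are
# then mapped to drum parts; all rules and mappings below are hypothetical.
import random

rules = {
    "BEAT": [["BAR", "BAR"]],             # a beat unrolls into two bars
    "BAR":  [["N", "X", "N", "X"],        # two alternative bar expansions
             ["N", "N'", "X", "X'"]],
}
terminals_to_drums = {"N": "kick", "N'": "kick(ghost)",
                      "X": "hat", "X'": "snare"}

def derive(symbol, rng):
    """Expand a symbol by applying rewrite rules until only terminals
    remain; the sequence of rule choices is the 'path'."""
    if symbol not in rules:
        return [symbol]
    out = []
    for s in rng.choice(rules[symbol]):
        out.extend(derive(s, rng))
    return out

rng = random.Random(7)        # fixing the seed fixes the path
pattern = derive("BEAT", rng)
print([terminals_to_drums[t] for t in pattern])
```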

Neuman, I. “Mapping Pitch Classes and Sound Objects: A Bridge Between Klumpenhouwer Networks and Schaeffer’s TARTYP,”
Proceedings of the 2018 Sound and Music Computing Conference (Limassol, Cyprus)

ABSTRACT
We present an interactive generative method for bridging between sound-object composition rooted in Pierre Schaeffer’s TARTYP taxonomy and transformational pitch-class composition ingrained in Klumpenhouwer Networks. We create a quantitative representation of sound objects within an ordered sound space. We use this representation to define a probability-based mapping of pitch classes to sound objects. We demonstrate the implementation of the method in a real-time compositional process that also utilizes our previous work on a TARTYP generative grammar tool and an interactive K-Network tool.


PDF
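
The sketch below illustrates, under stated assumptions, what a probability-based mapping from pitch classes to an ordered space of sound objects can look like. The object positions and the distance-decay weighting are invented for demonstration and are not the paper's quantitative representation.

```python
# Schematic sketch: pitch classes 0-11 and sound objects share an ordered
# axis; each pitch class draws a sound object with probability that decays
# with the distance between their positions (assumed values throughout).
import numpy as np

sound_objects = ["N", "N'", "X", "Y", "T", "W"]       # ordered sound space
obj_pos = np.linspace(0, 11, len(sound_objects))      # spread across axis

def object_for_pitch_class(pc, rng, decay=1.0):
    """Sample a sound object for pitch class pc (0-11)."""
    weights = np.exp(-decay * np.abs(obj_pos - pc))   # closer = likelier
    weights /= weights.sum()                          # normalize to probs
    return rng.choice(sound_objects, p=weights)

rng = np.random.default_rng(0)
# Map a short pitch-class series to sound objects.
print([object_for_pitch_class(pc, rng) for pc in (0, 4, 7, 11)])
```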

Neuman, I. “SIG~: Performance Interface for Schaefferian Sound-Object Improvisation,”
Proceedings of the 2015 International Computer Music Conference (Denton, TX)

ABSTRACT
Pierre Schaeffer’s theory of sound objects is a milestone in the historical development of electronic music. The TARTYP plays a central role in this theory. The TARTYP, however, is not widely accepted as a practical tool for musical analysis and composition, in part due to the large number of confusing and vague terms it introduces. This paper suggests a focus on Schaeffer’s sound recordings that exemplify the TARTYP as a source for aural learning of this taxonomy, and an improvisational approach that explores the practical applications of the TARTYP to real-time composition and computer improvisation. Software based on the TARTYP generative grammars and a performance system supporting this improvisational concept are presented, along with specialized graphic notation of TARTYP sound objects set in animated scores. Finally, the paper describes performance practices developed for SIG~, a Schaefferian improvisation group based in Iowa City.


PDF

Neuman, I. “Generative Tools for Interactive Composition: Real-Time Musical Structures Based on Schaeffer’s TARTYP and on Klumpenhouwer Networks,”
Computer Music Journal 38, no. 2 (2014)

ABSTRACT
Interactive computer music is comparable to improvisation because it includes elements of real-time composition performed by the computer. This process of real-time composition often incorporates stochastic techniques that remap a predetermined fundamental structure to a surface of sound processing. The hierarchical structure is used to pose restrictions on the stochastic processes, but, in most cases, the hierarchical structure in itself is not created in real time. This article describes how existing musical analysis methods can be converted into generative compositional tools that allow composers to generate musical structures in real time. It proposes a compositional method based on generative grammars derived from Pierre Schaeffer’s TARTYP, and describes the development of a compositional tool for real-time generation of Klumpenhouwer networks. The approach is based on the intersection of musical ideas with fundamental concepts in computer science including generative grammars, predicate logic, concepts of structural representation, and various methods of categorization.


PDF
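
To make the Klumpenhouwer-network side of this work concrete, here is a minimal sketch, not the article's tool, that checks whether a small network of pitch classes is consistent with its transposition (T) and inversion (I) arrow labels.

```python
# A Klumpenhouwer network: nodes hold pitch classes, and each arrow is
# labeled with a transposition T_n or inversion I_n that must map its
# source node's pitch class onto its target's.
def T(n): return lambda pc: (pc + n) % 12   # transposition by n semitones
def I(n): return lambda pc: (n - pc) % 12   # inversion with index n

def knet_consistent(nodes, arrows):
    """nodes: {name: pitch_class}; arrows: [(src, dst, op)]."""
    return all(op(nodes[src]) == nodes[dst] for src, dst, op in arrows)

# A small network on {C, E, G#}: two T4 arrows and one I8 arrow.
nodes = {"a": 0, "b": 4, "c": 8}
arrows = [("a", "b", T(4)), ("b", "c", T(4)), ("a", "c", I(8))]
print(knet_consistent(nodes, arrows))  # True: T4(0)=4, T4(4)=8, I8(0)=8
```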

Neuman, I. “Generative Grammars for Interactive Composition Based on Schaeffer’s TARTYP,”
Proceedings of the 2013 International Computer Music Conference (Perth, Australia)

ABSTRACT
Noam Chomsky’s Phrase Structure (PS) grammars and the [∑, F] form of rewrite rules are an efficient analytical tool for complex musical structures as well as a generative tool for classification-based compositional processes. Pierre Schaeffer’s summary table of sound typology, the TARTYP, is a milestone in the evolution of contemporary approaches to the organization of musical material. In this paper, we propose a compositional method that combines the TARTYP classification of sound objects with generative grammars derived from this table. These grammars enable the creation of musical structures that reflect the inter-relationships suggested by the table’s structure. The tools presented in this paper are designed for real-time compositions in interactive environments. They are embedded in Max/MSP or Pure Data as extensions of the MaxObject class and directly engage the sound processing capabilities of these environments. The complex musical structures generated by these tools are brought to life at the surface of the composition in a versatile way that uses the spectral signatures of sound objects from Schaeffer’s sound examples.



PDF




Acknowledgments

[CAF logo]

This research was generously supported by the Center for Africana Futures (CAF) at Texas Southern University. CAF operates with funding provided by the Digital Ethnic Futures Consortium, a program supported by The Mellon Foundation.