08/01/2023#

825. undergrads#

Hide code cell source
from faker import Faker

def generate_random_names(num_names):
    fake = Faker()
    names = [[fake.first_name(), fake.last_name()] for _ in range(num_names)]
    return names

def create_table(names):
    header = ["Number", "Name"]
    table = []
    table.append(header)
    for idx, name in enumerate(names, start=1):
        full_name = " ".join(name)
        row = [idx, full_name]
        table.append(row)

    # Printing the table
    for row in table:
        print(f"{row[0]:<10} {row[1]:<30}")

# Generate 10 random names and call the function
random_names = generate_random_names(10)
create_table(random_names)
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[1], line 1
----> 1 from faker import Faker
      3 def generate_random_names(num_names):
      4     fake = Faker()

ModuleNotFoundError: No module named 'faker'
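The traceback above just means the Faker package isn't installed in this notebook's environment. A minimal fix, assuming the cell is re-run in the same kernel after installing:

%pip install Faker    # install into the active kernel's environment
from faker import Faker    # the import in the cell above should now succeed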

826. graçias 🙏#

  • beta version of the fenagas webapp is up & running

  • andrew & fawaz will be first to test it

  • consider rolling it out to faculty in department

  • philosophe, elliot, betsy, and also with residents

827. fenagas#

  • let ai say about fena “annuŋŋamya mu makubo ago butukirivu” (Luganda: “he guides me in paths of righteousness”)

  • ai can’t produce such nuance on its own

  • but, guided, it can

828. music#

  • take me to church on apple music

  • my fave playlist

  • savor it!

829. boards#

  • my scores are expired

  • i need to retake them

  • will be fun with ai

  • consider a timeline

  • and with fenagas, llc?

  • just might have bandwidth

  • but it’s the synthesis of the two

  • that will be the most fun

830. breakthru#

Act I: Hypothesis - Navigating the Realm of Ideas and Concepts

In Act I, we embark on a journey of exploration, where ideas take center stage. We delve into the realm of hypotheses and concepts, laying the foundation for our scientific inquiry. From conceiving research questions to formulating testable propositions, Act I serves as the starting point of our intellectual pursuit. Through manuscripts, code, and Git, we learn to articulate and organize our ideas effectively, setting the stage for robust investigations and insightful discoveries.

Act II: Data - Unveiling the Power of Information

Act II unfolds as we dive into the realm of data, where raw information becomes the fuel for knowledge. Through the lenses of Python, AI, R, and Stata, we explore data collection, processing, and analysis. Act II empowers us to harness the potential of data and unleash its power in extracting meaningful insights. By mastering the tools to handle vast datasets and uncover patterns, Act II equips us to bridge the gap between theoretical hypotheses and empirical evidence.

Act III: Estimates - Seeking Truth through Inference

In Act III, we venture into the world of estimates, where statistical methods guide us in drawing meaningful conclusions. Nonparametric, semiparametric, parametric, and simulation techniques become our allies in the quest for truth. Act III enables us to infer population characteristics from sample data, making informed decisions and drawing reliable generalizations. Understanding the nuances of estimation empowers us to extract valuable information from limited observations, transforming data into actionable knowledge.

Act IV: Variance - Grappling with Uncertainty

Act IV brings us face to face with variance, where uncertainty and variability loom large. Along the way we confront truth, rigor, error, sloppiness, and the unsettling specter of fraud. Act IV teaches us to navigate the intricacies of uncertainty, recognize the sources of variation, and identify potential pitfalls. By embracing variance, we fortify our methodologies, enhance the rigor of our analyses, and guard against errors and biases that may distort our findings.

Act V: Explanation - Illuminating the “Why” behind the “What”

Act V marks the pinnacle of our journey, where we seek to unravel the mysteries behind observed phenomena. Oneway, Twoway, Multivariable, Hierarchical, Clinical, and Public perspectives converge in a quest for understanding. Act V unfolds the rich tapestry of explanations, exploring causal relationships, uncovering hidden connections, and interpreting complex findings. By delving into the intricacies of explanation, Act V empowers us to communicate our discoveries, inspire new research avenues, and drive positive change in our scientific pursuits.

Epilogue: Embracing the Journey of Knowledge

In the Epilogue, we reflect on our expedition through Fenagas, celebrating the richness of knowledge and the evolution of our understanding. Open Science, Self-publishing, Published works, Grants, Proposals, and the interconnected world of Git & Spoke symbolize the culmination of our endeavors. Epilogue serves as a reminder of the ever-growing landscape of learning and the profound impact our contributions can have. Embracing the spirit of curiosity, we step forward, armed with newfound wisdom, to navigate the boundless seas of knowledge and ignite the flame of discovery in ourselves and others.

831. fenagas#

  • each paper, manuscript, or project should have its own set of repos

  • these will necessarily include a mixture of private and public repos

  • private repos will be used for collaboration

  • the public repos will be used for publication

  • fenagas is a private company and recruiter

  • so it will have its own set of repos as well

  • but the science and research will have its own repos

832. jerktaco#

  • oxtail

  • jerk chicken

  • sweet chilli-fried whole jerk snapper. is that a thing? quick google says yes.

833. eddie#

Kadi and Mark…

  • The square root of the number of employees you employ will do most of the work…

  • 5 classical composers created 95% of the classical music that’s played

  • and yet if you look at their music, only 5% of it is what’s played 95% of the time

  • Debate

08/02/2023#

834. fena#

  • fawaz initially mistook persian and urdu for arabic

  • and read them out but said they made no sense

  • then recognized the “middle one” as arabic

  • with the meaning that is intended

  • but probably not idiomatic

Hide code cell source
data = [
    ("Eno yaffe ffena.", "Luganda", "Our and by us."),
    ("Nuestro y por nosotros", "Spanish", "Ours and by us"),
    ("Le nôtre et par nous", "French", "Ours and by us"),
    ("Unser und von uns", "German", "Ours and by us"),
    ("Nostro e da noi", "Italian", "Ours and by us"),
    ("Nosso e por nós", "Portuguese", "Ours and by us"),
    ("Ons en door ons", "Dutch", "Ours and by us"),
    ("Наш и нами", "Russian", "Ours and by us"),
    ("我们的,由我们提供", "Chinese", "Ours and by us"),
    ("हमारा और हमसे", "Nepali", "Ours and by us"),
    ("نا و توسط ما", "Persian", "Ours and by us"),
    ("私たちのものであり、私たちによって", "Japanese", "Ours and by us"),
    ("لنا وبواسطتنا", "Arabic", "Ours and by us"),
    ("שלנו ועל ידינו", "Hebrew", "Ours and by us"),
    ("Yetu na kwa sisi", "Swahili", "Ours and by us"),
    ("Yetu futhi ngathi sisi", "Zulu", "Ours and like us"),
    ("Tiwa ni aṣẹ ati nipa wa", "Yoruba", "Ours and through us"),
    ("A ka na anyi", "Igbo", "Ours and by us"),
    ("Korean", "Korean", "Ours and by us"),
    ("Meidän ja meidän toimesta", "Finnish", "Ours and by us"),
    ("ኦህድዎና በእኛ", "Amharic", "Ours and by us"),
    ("Hinqabu fi hinqabu jechuun", "Oromo", "Ours and through us"),
    ("ምንም ነገርና እኛ በእኛ", "Tigrinya", "Nothing and by us"),
    ("हमारा और हमसे", "Marathi", "Ours and by us"),
    ("અમારા અને અમારા દ્વારા", "Gujarati", "Ours and by us"),
    ("ما و توسط ما", "Urdu", "Ours and by us"),
    ("우리 것이며, 우리에 의해", "Korean", "Ours and by us"),  # New row for Korean
]

def print_table(data):
    print(" {:<4}  {:<25}  {:<15}  {:<25} ".format("No.", "Phrase", "Language", "English Translation"))
    print("" + "-" * 6 + "" + "-" * 32 + "" + "-" * 17 + "" + "-" * 27 + "")
    for idx, (phrase, language, translation) in enumerate(data, 1):
        print(" {:<4}  {:<25}  {:<15}  {:<25} ".format(idx, phrase, language, translation))

print_table(data)
 No.   Phrase                     Language         English Translation       
----------------------------------------------------------------------------------
 1     Eno yaffe ffena.           Luganda          Our and by us.            
 2     Nuestro y por nosotros     Spanish          Ours and by us            
 3     Le nôtre et par nous       French           Ours and by us            
 4     Unser und von uns          German           Ours and by us            
 5     Nostro e da noi            Italian          Ours and by us            
 6     Nosso e por nós            Portuguese       Ours and by us            
 7     Ons en door ons            Dutch            Ours and by us            
 8     Наш и нами                 Russian          Ours and by us            
 9     我们的,由我们提供                  Chinese          Ours and by us            
 10    हमारा और हमसे              Nepali           Ours and by us            
 11    نا و توسط ما               Persian          Ours and by us            
 12    私たちのものであり、私たちによって          Japanese         Ours and by us            
 13    لنا وبواسطتنا              Arabic           Ours and by us            
 14    שלנו ועל ידינו             Hebrew           Ours and by us            
 15    Yetu na kwa sisi           Swahili          Ours and by us            
 16    Yetu futhi ngathi sisi     Zulu             Ours and like us          
 17    Tiwa ni aṣẹ ati nipa wa    Yoruba           Ours and through us       
 18    A ka na anyi               Igbo             Ours and by us            
 19    Meidän ja meidän toimesta  Finnish          Ours and by us            
 20    ኦህድዎና በእኛ                  Amharic          Ours and by us            
 21    Hinqabu fi hinqabu jechuun  Oromo            Ours and through us       
 22    ምንም ነገርና እኛ በእኛ            Tigrinya         Nothing and by us         
 23    हमारा और हमसे              Marathi          Ours and by us            
 24    અમારા અને અમારા દ્વારા     Gujarati         Ours and by us            
 25    ما و توسط ما               Urdu             Ours and by us            
 26    우리 것이며, 우리에 의해             Korean           Ours and by us            

835. kind#

The statement you’ve provided suggests that someone who enjoys being surrounded by like-minded individuals (kindred spirits) may not have the energy or drive to compete or challenge themselves against strong adversaries or opponents. This seems to express a perspective on complacency, comfort zones, and personal growth.

However, it’s essential to understand that people have different paths to personal growth and self-fulfillment. While some thrive on challenge and adversity, finding it energizing and invigorating, others might find equal growth in collaboration, mutual understanding, and shared passions.

Realism, as the statement concludes, might refer to understanding and accepting these differences in human nature and personal paths, acknowledging that everyone has their unique journey and definition of success.

836. adversarial networks#

GANs pit two neural networks against each other in a kind of game, where one network (the generator) tries to create fake data that looks real, while the other (the discriminator) tries to tell real from fake.

  1. Supervised Learning, \(Y\): Algorithm is trained on labeled data.

  2. Unsupervised Learning, \(X\): Algorithm is trained on unlabeled data and looks for patterns.

  3. Semi-Supervised Learning, \(\beta\): Uses both labeled and unlabeled data for training.

  4. Reinforcement Learning, \(\epsilon\): Algorithm learns by interacting with an environment and receiving feedback in the form of rewards or penalties.

  5. Transfer Learning, \(z\): Using knowledge gained from one task to aid performance on a related, but different task.

  6. Generative Adversarial Networks, \(\rho\): A subset of unsupervised learning where two networks are trained together in a competitive fashion.

Hide code cell source
import pandas as pd

data = {
    "Type of ML": ["Supervised", "Unsupervised", "Semi-Supervised", "Reinforcement", "Transfer", "GANs"],
    "Pros": [
        "Direct feedback, High accuracy with enough data",
        "Works with unlabeled data, Can uncover hidden patterns",
        "Leverages large amounts of unlabeled data",
        "Adapts to dynamic environments, Potential for real-time learning",
        "Saves training time, Can leverage pre-trained models",
        "Generates new data, Can achieve impressive realism"
    ],
    "Cons": [
        "Needs labeled data, Can overfit",
        "No feedback, Harder to verify results",
        "Needs some labeled data, Combines challenges of both supervised and unsupervised",
        "Requires careful reward design, Can be computationally expensive",
        "Not always straightforward, Domain differences can be an issue",
        "Training can be unstable, May require lots of data and time"
    ]
}

df = pd.DataFrame(data)

for index, row in df.iterrows():
    print(f"Type of ML: {row['Type of ML']}\nPros: {row['Pros']}\nCons: {row['Cons']}\n{'-'*40}")
Type of ML: Supervised
Pros: Direct feedback, High accuracy with enough data
Cons: Needs labeled data, Can overfit
----------------------------------------
Type of ML: Unsupervised
Pros: Works with unlabeled data, Can uncover hidden patterns
Cons: No feedback, Harder to verify results
----------------------------------------
Type of ML: Semi-Supervised
Pros: Leverages large amounts of unlabeled data
Cons: Needs some labeled data, Combines challenges of both supervised and unsupervised
----------------------------------------
Type of ML: Reinforcement
Pros: Adapts to dynamic environments, Potential for real-time learning
Cons: Requires careful reward design, Can be computationally expensive
----------------------------------------
Type of ML: Transfer
Pros: Saves training time, Can leverage pre-trained models
Cons: Not always straightforward, Domain differences can be an issue
----------------------------------------
Type of ML: GANs
Pros: Generates new data, Can achieve impressive realism
Cons: Training can be unstable, May require lots of data and time
----------------------------------------

837. mbappé#

In the world of machine learning, there’s an architecture called Generative Adversarial Networks (GANs). A GAN consists of two neural networks: a generator and a discriminator. The generator creates fake data, while the discriminator evaluates data to determine if it’s real or generated by the generator. These networks are “adversaries”, and they improve through their competition with one another.

Mbappé in Ligue 1 is like the generator in a GAN:

  1. Competitiveness (Lack of a Worthy Adversary): If the discriminator is too weak (akin to the other Ligue 1 teams compared to PSG), then the generator might produce data (or performance) that seems impressive in its context, but might not be as refined as it would be if it faced a stronger discriminator. Just as the EPL could serve as a more challenging discriminator for Mbappé, making him fine-tune his “generation” of skills, a stronger discriminator in a GAN forces the generator to produce higher-quality data.

  2. Exposure to Challenges: If Mbappé were in the EPL (a stronger discriminator), he’d face more frequent and varied challenges, pushing him to adapt and refine his skills, much like a generator improving its data generation when pitted against a robust discriminator.

  3. Star Power & Champions League: Just as Mbappé gets to face high-level competition in the Champions League and play alongside top talents in PSG, a generator can still produce high-quality data when trained with superior techniques or in combination with other skilled “networks”, even if its regular discriminator isn’t top-tier.

  4. Future Moves & Evolution: Over time, a GAN might be fine-tuned or paired with stronger discriminators. Similarly, Mbappé might move to a more competitive league in the future, facing “stronger discriminators” that challenge and refine his game further.

In essence, for optimal growth and refinement, both a soccer player and a GAN benefit from being challenged regularly by worthy adversaries. PSG dominating Ligue 1 without a consistent worthy adversary might not push them to their absolute limits, just as a generator won’t produce its best possible data without a strong discriminator to challenge it.
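To make the generator-versus-discriminator game concrete, here is a minimal numpy sketch (a toy illustration with made-up parameters, not a production GAN): the “real” data is a one-dimensional Gaussian, the generator only learns its mean, and the discriminator is a logistic classifier; the two are updated in alternation, and the generator improves precisely because the discriminator keeps getting better at telling real from fake.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "real" data comes from N(4, 1); the generator produces N(mu_g, 1)
# and only learns its mean. The discriminator is a logistic classifier on x.
real_mean = 4.0

def generator(mu_g, n):
    return rng.normal(mu_g, 1.0, n)

def discriminator(x, w, b):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))   # estimated P(x is real)

w, b, mu_g = 0.0, 0.0, 0.0        # discriminator weights and generator mean
lr_d, lr_g, n = 0.05, 0.05, 256   # learning rates and batch size (arbitrary)

for _ in range(3000):
    real = rng.normal(real_mean, 1.0, n)
    fake = generator(mu_g, n)

    # Discriminator step: logistic-regression gradient ascent on labelling
    # real samples as 1 and fake samples as 0.
    for x, y in ((real, 1.0), (fake, 0.0)):
        p = discriminator(x, w, b)
        w += lr_d * np.mean((y - p) * x)
        b += lr_d * np.mean(y - p)

    # Generator step: nudge mu_g to raise D's score on fresh fakes.
    # d/d(mu_g) of mean(log D(fake)) works out to mean((1 - D(fake)) * w).
    p_fake = discriminator(generator(mu_g, n), w, b)
    mu_g += lr_g * np.mean((1.0 - p_fake) * w)

print(f"generator mean after training: {mu_g:.2f} (real data mean is {real_mean})")

The same dynamic is what the deep-learning version exploits: the generator’s gradient flows through the discriminator, so the sharper the critic, the more informative the training signal it passes back.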

838. lyrical#

  • kyrie eleison

  • lord deliver me

  • this is my exodus

He leads me beside still waters
He restoreth my soul
When you become a believer
Your spirit is made right
And sometimes, the soul doesn't get to notice
It has a hole in it
Due to things that's happened in the past
Hurt, abuse, molestation
But we wanna speak to you today and tell you
That God wants to heal the hole in your soul
Some people's actions are not because their spirit is wrong
But it's because the past has left a hole in their soul
May this wisdom help you get over your past
And remind you that God wants to heal the hole in your soul
I have my sister Le'Andria here
She's gonna help me share this wisdom
And tell this story
Lord
Deliver me, yeah
'Cause all I seem to do is hurt me
Hurt me, yeah
Lord
Deliver me
'Cause all I seem to do is hurt me
(Yes, sir)
Hurt me, yeah, yeah
(I know we should be finishing but)
(Sing it for me two more times)
Lord
Deliver me, yeah
'Cause all I seem to do is hurt me
(Na-ha)
Hurt me
(One more time)
Yeah
Lord
(Oh)
Deliver me
'Cause all I seem to do is hurt me, yeah
Hurt me, yeah
Whoa, yeah
And my background said
(Whoa-whoa, Lord)
Oh yeah (deliver me)
God rescued me from myself, from my overthinking
If you're listening out there
Just repeat after me if you're struggling with your past
And say it
(Oh, Lord, oh)
Let the Lord know, just say it, oh
(Oh, Lord, Lord)
He wants to restore your soul
He said
(Deliver me)
Hey
If my people, who are called by my name
Will move themselves and pray
(Deliver me)
Seek my face, turn from their wicked ways
I will hear from Heaven
Break it on down
So it is
It is so
Amen
Now when we pray
Wanna end that with a declaration, a decree
So I'm speaking for all of you listening
Starting here, starting now
The things that hurt you in the past won't control your future
Starting now, this is a new day
This is your exodus, you are officially released
Now sing it for me Le'Andria
Yeah
(This is my Exodus)
I'm saying goodbye
(This is my Exodus)
To the old me, yeah
(This is my Exodus)
Oh, oh, oh
(Thank you, Lord)
And I'm saying hello
(Thank you, Lord)
To the brand new me, yeah
(Thank you, Lord)
Yeah, yeah, yeah, yeah
This is
(This is my Exodus)
I declare it
(This is my Exodus)
And I decree
(This is my Exodus)
Oh this is, this day, this day is why I thank you, Lord
(This is my Exodus)
(Thank you, Lord)
Around
(Thank you, Lord)
For you and for me
(Thank you, Lord)
Yeah-hey-hey-yeah
Now, Lord God
(This is my Exodus)
Now, Lord God
(This is my Exodus)
It is my
(This is my Exodus)
The things that sent to break me down
(This is my Exodus)
Hey-hey-hey, hey-hey-hey, hey-hey-hey, hey-yeah
(Thank you, Lord)
(Thank you, Lord)
Every weapon
(Thank you, Lord)
God is you and to me, there for me
Source: Musixmatch
Songwriters: Donald Lawrence / Marshon Lewis / William James Stokes / Robert Woolridge / Desmond Davis

839. counterfeit#

In the context of competitive sports, the concept of “generating fakes” is indeed a fundamental aspect of gameplay. Athletes often use various techniques, such as dummies, side-steps, feints, or deceptive movements, to outwit their opponents and create opportunities for themselves or their teammates. These deceptive maneuvers act as the “generator” in the game, producing fake actions that challenge the opponent’s perception and decision-making.

Just like the generator in a GAN creates fake data to confuse the discriminator, athletes generate fake movements to deceive their opponents and gain an advantage. By presenting a range of possible actions, athletes keep their adversaries guessing and force them to make hasty decisions, potentially leading to mistakes or creating openings for an attack.

The effectiveness of generating fakes lies in the balance between unpredictability and precision. Just as a GAN’s generator must create data that is realistic enough to deceive the discriminator, athletes must execute their fakes with skill and timing to make them convincing and catch their opponents off guard.

Moreover, much like how the discriminator in a GAN becomes stronger by learning from previous encounters, athletes also improve their “discrimination” skills over time by facing various opponents with different playing styles and tactics. The experience of playing against worthy adversaries enhances an athlete’s ability to recognize and respond to deceptive movements, making them more refined in their decision-making and defensive actions.

In summary, generating fakes in competitive sports is a crucial aspect that parallels the dynamics of Generative Adversarial Networks. Just as a GAN benefits from facing a strong discriminator to refine its data generation, athletes grow and excel when regularly challenged by worthy adversaries who can test their ability to produce deceptive movements and refine their gameplay to the highest level.

840. music#

Composers in music, much like athletes in competitive sports and Generative Adversarial Networks (GANs), utilize the element of surprise and expectation to create captivating and emotionally engaging compositions. They play with the listener’s anticipation, offering moments of tension and resolution, which add depth and excitement to the musical experience.

In a musical composition, composers establish patterns, melodic motifs, and harmonic progressions that the listener subconsciously starts to expect. These expectations are the “discriminator” in this analogy, as they act as a reference point against which the composer can generate moments of tension and surprise, similar to the generator’s role in a GAN.

When a composer introduces a musical phrase that deviates from what the listener expects, it creates tension. This deviation can be through unexpected harmonies, dissonant intervals, rhythmic variations, or sudden changes in dynamics. This is akin to the “fake data” generated by the GAN’s generator or the deceptive movements used by athletes to outwit their opponents.

Just as a GAN’s discriminator learns from previous encounters to recognize fake data better, listeners’ musical discrimination skills improve over time as they become more familiar with different compositions and musical styles. As a result, composers must continually innovate and challenge the listener’s expectations to keep the music engaging and fresh.

The resolution in music, which ultimately satisfies the listener’s expectations, is the equivalent of a GAN’s generator producing data that appears realistic enough to deceive the discriminator successfully. Composers craft resolutions that give a sense of closure and fulfillment by returning to familiar themes, tonal centers, or melodic patterns.

A well-composed musical piece strikes a balance between unexpected twists and satisfying resolutions. Too many surprises without resolution can leave listeners disoriented and unsatisfied, just as a GAN’s generator may produce meaningless or unrealistic data. On the other hand, predictability without any element of surprise can result in boredom, both in music and in the world of sports.

Let’s illustrate this concept with a simple Python code snippet representing a musical script in the form of sheet music:

pip install music21

In this simple musical script, the notes and chords create an expected melody and progression in the key of C major. By introducing new harmonies or rhythms at strategic points, the composer can generate tension and surprise in the music, capturing the listener’s attention. Ultimately, the music will return to familiar notes and chords, resolving the tension and providing a satisfying conclusion.

In conclusion, just as GANs and competitive sports benefit from generating fakes and challenging adversaries, composers in music use the listener’s expectations and create tension through deviations, only to resolve it with familiar elements, creating a rich and engaging musical experience.

Hide code cell source
!pip install music21

import os
from music21 import *
from IPython.display import Image, display

# Set the path to the MuseScore executable
musescore_path = '/Applications/MuseScore 4.app/Contents/MacOS/mscore'
us = environment.UserSettings()
us['musicxmlPath'] = musescore_path

# Create a score
score = stream.Score()

# Create a tempo marking (use a distinct name so the music21 `tempo` module isn't shadowed)
metronome = tempo.MetronomeMark(number=120)

# Create a key signature (C major)
key_signature = key.KeySignature(0)

# Create a time signature (4/4)
time_signature = meter.TimeSignature('4/4')

# Create a music stream
music_stream = stream.Stream()

# Add the tempo, key signature, and time signature to the music stream
music_stream.append(metronome)
music_stream.append(key_signature)
music_stream.append(time_signature)

# Define a list of note names
notes = ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C5']

# Create notes and add them to the music stream
for note_name in notes:
    new_note = note.Note(note_name, quarterLength=1)
    music_stream.append(new_note)

# Define a list of chords
chords = [chord.Chord(['C', 'E', 'G']), chord.Chord(['F', 'A', 'C']), chord.Chord(['G', 'B', 'D'])]

# Add chords to the music stream
for c in chords:
    music_stream.append(c)

# Add the music stream to the score
# score.insert(0, music_stream)

# Check the contents of the music_stream
# print(music_stream.show('text'))

# Save the score as MusicXML
musicxml_path = '/users/d/desktop/music21_example.musicxml'
# score.write('musicxml', fp=musicxml_path)

# Define the path for the PNG image
# png_path = '/users/d/desktop/music21_example.png'

# Convert the MusicXML to a PNG image
# conv = converter.subConverters.ConverterMusicXML()
# conv.write(score, 'png', png_path)

# Display the PNG image
# display(Image(filename=png_path))

# Clean up temporary files if desired
# os.remove(musicxml_path)
# os.remove(png_path)
Requirement already satisfied: music21 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (9.1.0)
Requirement already satisfied: chardet in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (5.2.0)
Requirement already satisfied: joblib in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (1.3.1)
Requirement already satisfied: jsonpickle in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (3.0.1)
Requirement already satisfied: matplotlib in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (3.7.2)
Requirement already satisfied: more-itertools in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (10.1.0)
Requirement already satisfied: numpy in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (1.25.2)
Requirement already satisfied: requests in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (2.31.0)
Requirement already satisfied: webcolors>=1.5 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from music21) (1.13)
Requirement already satisfied: contourpy>=1.0.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (1.1.0)
Requirement already satisfied: cycler>=0.10 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (4.42.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (1.4.4)
Requirement already satisfied: packaging>=20.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (23.1)
Requirement already satisfied: pillow>=6.2.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (10.0.0)
Requirement already satisfied: pyparsing<3.1,>=2.3.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib->music21) (2.8.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from requests->music21) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from requests->music21) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from requests->music21) (2.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from requests->music21) (2023.7.22)
Requirement already satisfied: six>=1.5 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from python-dateutil>=2.7->matplotlib->music21) (1.16.0)
{0.0} <music21.tempo.MetronomeMark animato Quarter=120>
{0.0} <music21.key.KeySignature of no sharps or flats>
{0.0} <music21.meter.TimeSignature 4/4>
{0.0} <music21.note.Note C>
{1.0} <music21.note.Note D>
{2.0} <music21.note.Note E>
{3.0} <music21.note.Note F>
{4.0} <music21.note.Note G>
{5.0} <music21.note.Note A>
{6.0} <music21.note.Note B>
{7.0} <music21.note.Note C>
{8.0} <music21.chord.Chord C E G>
{9.0} <music21.chord.Chord F A C>
{10.0} <music21.chord.Chord G B D>
None
(rendered sheet-music image of the C-major scale and chords)

841. learning#

  • generative adversarial networks

  • challenge-level, skill-level, and equipping students with the right tools to “level up”

  • use this approach to create a “learning” GAN for any sort of course but starting with a course on Stata

To design a Stata Programming class with the flexibility to adapt it into Python and R Programming classes, we can organize the content according to the provided headings in the _toc.yml file. We will structure the course into five acts, and each act will contain three to six scenes representing different chapters or topics. Each scene will be a learning module or topic that covers a specific aspect of Stata programming (and later Python and R programming).

Let’s begin by creating the _toc.yml:
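One way to draft that file is to generate it from the course outline programmatically. Here is a rough sketch (assuming PyYAML and Jupyter Book's jb-book schema; the slug helper and the content/lessons/ paths are hypothetical placeholders) that mirrors the listing just below:

# Sketch: build a jb-book _toc.yml from the act/scene outline.
import yaml   # PyYAML

outline = {
    "Act I":    ["Manuscripts", "Code", "Git"],
    "Act II":   ["Python", "AI", "R", "Stata"],
    "Act III":  ["Nonparametric", "Semiparametric", "Parametric", "Simulation", "Uses, abuses"],
    "Act IV":   ["Truth", "Rigor", "Error", "Sloppiness", "Fraud", "Learning"],
    "Act V":    ["Oneway", "Twoway", "Multivariable", "Hierarchical", "Clinical", "Public"],
    "Epilogue": ["Open Science", "Self publish", "Published", "Grants", "Proposals", "Git & Spoke"],
}

def slug(title):
    # crude filename helper, e.g. "Uses, abuses" -> "uses_abuses"
    return title.lower().replace(" & ", " and ").replace(",", "").replace(" ", "_")

parts = [{"chapters": [{"file": "prologue"}]}]
for caption, scenes in outline.items():
    parts.append({"caption": caption,
                  "chapters": [{"file": f"content/lessons/{slug(s)}"} for s in scenes]})

toc = {"format": "jb-book", "root": "intro", "parts": parts}
with open("_toc.yml", "w") as f:
    yaml.safe_dump(toc, f, sort_keys=False)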

Fenagas
Prologue

Act I
Manuscripts
Code
Git

Act II
Python
AI
R
Stata

Act III
Nonparametric
Semiparametric
Parametric
Simulation
Uses, abuses

Act IV
Truth
Rigor
Error
Sloppiness
Fraud
Learning

Act V
Oneway
Twoway
Multivariable
Hierarchical
Clinical
Public
Epilogue

Open Science
Self publish
Published
Grants
Proposals
Git & Spoke

Automate
Bash
Unix
Courses

Stata Programming

Now, let’s create a brief description for each act and scene:

Act I - Introduction to Research Manuscripts and Version Control

Scene 1 - Understanding Research Manuscripts This scene will provide an overview of research manuscripts, their structure, and the importance of clear documentation in reproducible research.

Scene 2 - Introduction to Code In this scene, we will introduce coding concepts, syntax, and the use of Stata, Python, and R as programming languages for data analysis.

Scene 3 - Version Control with Git Students will learn the fundamentals of version control using Git, a powerful tool for tracking changes in code and collaborating with others.

Act II - Exploring Data Analysis with Python, AI, R, and Stata

Scene 1 - Python for Data Analysis This scene will cover basic data analysis tasks using Python, focusing on data manipulation, visualization, and statistical analysis.

Scene 2 - Introduction to Artificial Intelligence (AI) Students will gain insights into AI concepts and applications, including machine learning, deep learning, and generative adversarial networks (GANs).

Scene 3 - R for Data Science In this scene, we’ll explore R’s capabilities for data analysis, statistical modeling, and creating visualizations.

Scene 4 - Introduction to Stata Students will be introduced to Stata programming, including data management, analysis, and graphing features.

Act III - Advanced Topics in Data Analysis

Scene 1 - Nonparametric Statistics This scene will delve into nonparametric statistical methods and their applications in various research scenarios.

Scene 2 - Semiparametric Statistics Students will learn about semiparametric models and their advantages in handling complex data structures.

Scene 3 - Parametric Modeling This scene will cover parametric statistical models and their assumptions, along with practical implementation in the chosen programming languages.

Scene 4 - Simulation Techniques In this scene, students will learn about simulation methods to replicate observed data and explore “what if” scenarios in their analyses.

Scene 5 - Data Analysis Uses and Abuses We will discuss common mistakes and pitfalls in data analysis, emphasizing the importance of data integrity and robustness.

Act IV - Ensuring Data Quality and Integrity

Scene 1 - Seeking Truth in Research This scene will highlight the importance of truth-seeking in research and the impact of biased results on scientific discoveries.

Scene 2 - Rigorous Research Methods Students will learn about various rigorous research methodologies to ensure valid and reliable findings.

Scene 3 - Identifying and Addressing Errors We will explore different types of errors in research and how to identify and correct them during the data analysis process.

Scene 4 - Preventing Sloppiness in Analysis This scene will discuss best practices to avoid careless mistakes in data analysis that may compromise research outcomes.

Scene 5 - Fraud Detection in Research Students will explore methods and approaches to detect and prevent fraud in clinical and public health research.

Scene 6 - Learning from Data Drawing inspiration from Generative Adversarial Networks (GANs), this scene will encourage students to learn from data by simulating expected outcomes based on observed data.

Act V - Advanced Data Visualization and Reporting

Scene 1 - Oneway Plots and Scatterplots This scene will focus on creating oneway plots and scatterplots with jitter and overlapped mean and 95% CI bars to compare variables.

Scene 2 - Twoway Plots and Multivariable Visualization We will cover twoway plots and multivariable visualizations to explore relationships between multiple variables.

Scene 3 - Hierarchical Data Visualization Students will learn techniques for visualizing hierarchical data structures effectively.

Scene 4 - Data Visualization in Clinical Research This scene will demonstrate visualization methods specifically tailored to clinical research scenarios.

Scene 5 - Communicating Research to the Public In this final scene, we will explore effective ways to communicate research findings to the general public.

Epilogue - Advancing Open Science and Professional Development

Scene 1 - Embracing Open Science This scene will emphasize the importance of open science practices and promoting transparency in research.

Scene 2 - Self-publishing and Open Access Students will learn about self-publishing options and the benefits of open access to research outputs.

Scene 3 - Getting Published in Journals This scene will guide students through the process of getting research published in academic journals.

Scene 4 - Grant Writing and Proposal Development We will cover essential aspects of grant writing and proposal development for research funding.

Scene 5 - Git and Collaborative Workflows In this final scene, students will explore advanced Git workflows for collaborative programming projects.

Automate - Bash and Unix Scripting for Automation This optional chapter will introduce students to automation using bash and Unix scripting.

Courses - Explore Other Programming Courses Students will be provided with resources to explore more specialized programming courses beyond Stata, Python, and R.

With this _toc.yml and the organized content for each part and scene, the Stata Programming class can be easily adapted into Python and R Programming classes by modifying the programming language-specific examples and exercises while keeping the underlying concepts and topics consistent.

Based on the provided _toc.yml in the jb-book format, here is the redesigned Stata Programming class. Please note that the content of each chapter still needs to be written separately in the corresponding .ipynb or .md files; the organization of the class remains the same, with five acts, each containing multiple scenes (chapters) covering different topics in Stata programming, and you can add content and explanations in the respective files according to the intended class structure.

With all chapters represented as .ipynb files, the content is easier to create, manage, and access in Jupyter Notebook format; just ensure that each .ipynb file is written appropriately to deliver the class effectively.

Here is the redesigned Stata Programming class with 10 scenes per act:

Root: intro.ipynb
Title: Fenagas

Parts:
- Caption:
  Chapters:
  - File: prologue.ipynb

- Caption: Act I
  Chapters:
  - File: content/lessons/l1/act1_1.ipynb
  - File: content/lessons/l1/act1_2.ipynb
  - File: content/lessons/l1/act1_3.ipynb
  - File: content/lessons/l1/act1_4.ipynb
  - File: content/lessons/l1/act1_5.ipynb
  - File: content/lessons/l1/act1_6.ipynb
  - File: content/lessons/l1/act1_7.ipynb
  - File: content/lessons/l1/act1_8.ipynb
  - File: content/lessons/l1/act1_9.ipynb
  - File: content/lessons/l1/act1_10.ipynb

- Caption: Act II
  Chapters:
  - File: content/lessons/l2/act2_1.ipynb
  - File: content/lessons/l2/act2_2.ipynb
  - File: content/lessons/l2/act2_3.ipynb
  - File: content/lessons/l2/act2_4.ipynb
  - File: content/lessons/l2/act2_5.ipynb
  - File: content/lessons/l2/act2_6.ipynb
  - File: content/lessons/l2/act2_7.ipynb
  - File: content/lessons/l2/act2_8.ipynb
  - File: content/lessons/l2/act2_9.ipynb
  - File: content/lessons/l2/act2_10.ipynb

- Caption: Act III
  Chapters:
  - File: content/lessons/l3/act3_1.ipynb
  - File: content/lessons/l3/act3_2.ipynb
  - File: content/lessons/l3/act3_3.ipynb
  - File: content/lessons/l3/act3_4.ipynb
  - File: content/lessons/l3/act3_5.ipynb
  - File: content/lessons/l3/act3_6.ipynb
  - File: content/lessons/l3/act3_7.ipynb
  - File: content/lessons/l3/act3_8.ipynb
  - File: content/lessons/l3/act3_9.ipynb
  - File: content/lessons/l3/act3_10.ipynb

- Caption: Act IV
  Chapters:
  - File: content/lessons/l4/act4_1.ipynb
  - File: content/lessons/l4/act4_2.ipynb
  - File: content/lessons/l4/act4_3.ipynb
  - File: content/lessons/l4/act4_4.ipynb
  - File: content/lessons/l4/act4_5.ipynb
  - File: content/lessons/l4/act4_6.ipynb
  - File: content/lessons/l4/act4_7.ipynb
  - File: content/lessons/l4/act4_8.ipynb
  - File: content/lessons/l4/act4_9.ipynb
  - File: content/lessons/l4/act4_10.ipynb

- Caption: Act V
  Chapters:
  - File: content/lessons/l5/act5_1.ipynb
  - File: content/lessons/l5/act5_2.ipynb
  - File: content/lessons/l5/act5_3.ipynb
  - File: content/lessons/l5/act5_4.ipynb
  - File: content/lessons/l5/act5_5.ipynb
  - File: content/lessons/l5/act5_6.ipynb
  - File: content/lessons/l5/act5_7.ipynb
  - File: content/lessons/l5/act5_8.ipynb
  - File: content/lessons/l5/act5_9.ipynb
  - File: content/lessons/l5/act5_10.ipynb

- Caption: Epilogue
  Chapters:
  - File: content/lessons/l6/epi_1.ipynb
  - File: content/lessons/l6/epi_2.ipynb
  - File: content/lessons/l6/epi_3.ipynb
  - File: content/lessons/l6/epi_4.ipynb
  - File: content/lessons/l6/epi_5.ipynb
  - File: content/lessons/l6/epi_6.ipynb
  - File: content/lessons/l6/epi_7.ipynb
  - File: content/lessons/l6/epi_8.ipynb
  - File: content/lessons/l6/epi_9.ipynb
  - File: content/lessons/l6/epi_10.ipynb

- Caption: Git & Spoke
  Chapters:
  - File: content/lessons/l7/act7_1.ipynb
  - File: content/lessons/l7/act7_2.ipynb
  - File: content/lessons/l7/act7_3.ipynb
  - File: content/lessons/l7/act7_4.ipynb
  - File: content/lessons/l7/act7_5.ipynb
  - File: content/lessons/l7/act7_6.ipynb
  - File: content/lessons/l7/act7_7.ipynb
  - File: content/lessons/l7/act7_8.ipynb
  - File: content/lessons/l7/act7_9.ipynb
  - File: content/lessons/l7/act7_10.ipynb

- Caption: Feedback
  Chapters:
  - File: content/lessons/1_survey9.ipynb

- Caption: About Fena
  Chapters:
  - File: content/lessons/about/cite.ipynb
  - File: content/lessons/about/changelog.ipynb

- Caption: Courses
  Chapters:
  - URL: https://publichealth.jhu.edu/courses
    Title: Stata Programming
  - File: content/dramatispersonae/high_school_students/high_school_students.ipynb
  - File: content/dramatispersonae/undergraduates/undergraduates.ipynb
  - File: content/dramatispersonae/graduate_students/graduate_students.ipynb
  - File: content/dramatispersonae/medical_students/medical_students.ipynb

Sure! Below is a bash script named stataclass.sh that creates the folder structure and places the .ipynb files in the appropriate folders:

#!/bin/bash

# Create folders
mkdir -p content/lessons/l1
mkdir -p content/lessons/l2
mkdir -p content/lessons/l3
mkdir -p content/lessons/l4
mkdir -p content/lessons/l5
mkdir -p content/lessons/l6
mkdir -p content/lessons/l7
mkdir -p content/dramatispersonae/high_school_students
mkdir -p content/dramatispersonae/undergraduates
mkdir -p content/dramatispersonae/graduate_students
mkdir -p content/dramatispersonae/medical_students
mkdir -p content/dramatispersonae/residents
mkdir -p content/dramatispersonae/fellows
mkdir -p content/dramatispersonae/faculty
mkdir -p content/dramatispersonae/analysts
mkdir -p content/dramatispersonae/staff
mkdir -p content/dramatispersonae/collaborators
mkdir -p content/dramatispersonae/graduates
mkdir -p content/dramatispersonae/high_school_students/calvin_mathews
mkdir -p content/dramatispersonae/medical_students/andrew_arking
mkdir -p content/lessons/l6
mkdir -p content/lessons/l7

# Place the .ipynb files in appropriate folders
cp prologue.ipynb content/
# (assumes the act1_*.ipynb notebooks already exist in the current directory)
cp act1_1.ipynb content/lessons/l1/
cp act1_2.ipynb content/lessons/l1/
cp act1_3.ipynb content/lessons/l1/
cp act1_4.ipynb content/lessons/l1/
cp act1_5.ipynb content/lessons/l1/
cp act1_6.ipynb content/lessons/l1/
cp act1_7.ipynb content/lessons/l1/
cp act1_8.ipynb content/lessons/l1/
cp act1_9.ipynb content/lessons/l1/
cp act1_10.ipynb content/lessons/l1/

# Continue copying .ipynb files to other folders as required...

echo "Folder structure and files have been created successfully."

This script will create all the necessary folders and copy the .ipynb files to their respective locations, as specified in the _toc.yml format. Note that you’ll need to manually complete the script by copying the remaining .ipynb files to the corresponding folders based on the _toc.yml. Save the script and run it to create the desired folder structure and files.

Absolutely! You are right; there is a more elegant and efficient way to create the folders and files using a bash loop. Here’s the updated version of the stataclass.sh script using your suggestion:

#!/bin/bash

# Create folders
mkdir -p content/lessons/l{1..7}
mkdir -p content/dramatispersonae/{high_school_students,undergraduates,graduate_students,medical_students,residents,fellows,faculty,analysts,staff,collaborators,graduates}
mkdir -p content/dramatispersonae/high_school_students/calvin_mathews
mkdir -p content/dramatispersonae/medical_students/andrew_arking

# Place the .ipynb files in appropriate folders
cp prologue.ipynb content/
cp act1_{1..10}.ipynb content/lessons/l1/   # assumes the notebooks exist in the current directory

# Continue copying .ipynb files to other folders as required...

echo "Folder structure and files have been created successfully."

This script uses brace expansion in the mkdir and cp commands to create the necessary folders and copy the notebooks in a more concise way. It is both shorter and more readable, making it a more elegant solution. Just make sure to continue copying the remaining .ipynb files to the corresponding folders as specified in the _toc.yml format.

842. act1#

  • give students a tool to learn how to learn

  • a template of an entire manuscript.ipynb file

  • the python, ai, r, and stata programming scripts that support the manuscript.ipynb file

  • step-by-step instructions on creating a github account and a public and a private repository

  • push content to the public repository and use gh-pages to publish the content
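A minimal sketch of the GitHub setup and gh-pages publishing steps above, written as a notebook cell in the same spirit as the other code in this entry (the username, repository name, and commit message are placeholders, and it assumes git, jupyter-book, and ghp-import are installed):

!git init
!git remote add origin https://github.com/YOUR-USERNAME/manuscript-template.git   # hypothetical public repo
!git add . && git commit -m "manuscript.ipynb template plus python/ai/r/stata scripts"
!git push -u origin main
!jupyter-book build .                      # render the notebooks to _build/html
!ghp-import -n -p -f _build/html           # publish the built site on the gh-pages branch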

843. streamline#

#!/bin/bash

# Change the working directory to the desired location
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Uncomment the following line if you need to create the "three40" directory
# nano three40.sh & paste the contents of the three40.sh file
# chmod +x three40.sh
# mkdir three40
# cd three40
# nano _toc.yml & paste the contents of the _toc.yml file

# Create the root folder
# mkdir -p three40

# Create the "intro.ipynb" file inside the "root" folder
touch three40/intro.ipynb

# Function to read the chapters from the YAML file using pure bash
get_chapters_from_yaml() {
  local part="$1"
  local toc_file="_toc.yml"
  local lines
  local in_part=false

  while read -r line; do
    if [[ "$line" == *"$part"* ]]; then
      in_part=true
    elif [[ "$line" == *"- File: "* ]]; then
      if "$in_part"; then
        echo "$line" | awk -F': ' '{print $2}' | tr -d ' '
      fi
    elif [[ "$line" == *"-"* ]]; then
      in_part=false
    fi
  done < "$toc_file"
}

# Create parts and chapters based on the _toc.yml structure
parts=(
  "Act I"
  "Act II"
  "Act III"
  "Act IV"
  "Act V"
  "Epilogue"
  "Git & Spoke"
  "Courses"
)

# Loop through parts and create chapters inside each part folder
for part in "${parts[@]}"; do
  part_folder="three40/$part"
  mkdir -p "$part_folder"

  # Get the chapters for the current part from the _toc.yml
  chapters=($(get_chapters_from_yaml "$part"))

  # Create chapter files inside the part folder
  for chapter in "${chapters[@]}"; do
    touch "$part_folder/$chapter"
  done
done

# Create folders for dramatispersonae
files=(
  "high_school_students/high_school_students.ipynb"
  "undergraduates/undergraduates.ipynb"
  "graduate_students/graduate_students.ipynb"
  "medical_students/medical_students.ipynb"
  "residents/residents.ipynb"
  "fellows/fellows.ipynb"
  "faculty/faculty.ipynb"
  "analysts/analysts.ipynb"
  "staff/staff.ipynb"
  "collaborators/collaborators.ipynb"
  "graduates/graduates.ipynb"
  "high_school_students/calvin_mathews/calvin_mathews.ipynb"
  "medical_students/andrew_arking/andrew_arking.ipynb"
  "medical_students/andrew_arking/andrew_arking_1.ipynb"
  "collaborators/fawaz_al_ammary/fawaz_al_ammary.ipynb"
  "collaborators/fawaz_al_ammary/fawaz_al_ammary_1.ipynb"
)

# Loop through the file paths and create the corresponding directories
for file_path in "${files[@]}"; do
  # Remove the common prefix "content/dramatispersonae/" from the file path
  dir_path=${file_path#content/dramatispersonae/}
  
  # Create the directory
  mkdir -p "three40/content/dramatispersonae/$dir_path"
done

echo "Folder structure has been created successfully."
Root: intro.ipynb
Title: Fenagas

Parts:
- Caption:
  Chapters:
  - File: prologue.ipynb

- Caption: Act I
  Chapters:
  - File: content/lessons/l1/act1_1.ipynb
  - File: content/lessons/l1/act1_2.ipynb
  - File: content/lessons/l1/act1_3.ipynb
  - File: content/lessons/l1/act1_4.ipynb
  - File: content/lessons/l1/act1_5.ipynb
  - File: content/lessons/l1/act1_6.ipynb
  - File: content/lessons/l1/act1_7.ipynb
  - File: content/lessons/l1/act1_8.ipynb
  - File: content/lessons/l1/act1_9.ipynb
  - File: content/lessons/l1/act1_10.ipynb

- Caption: Act II
  Chapters:
  - File: content/lessons/l2/act2_1.ipynb
  - File: content/lessons/l2/act2_2.ipynb
  - File: content/lessons/l2/act2_3.ipynb
  - File: content/lessons/l2/act2_4.ipynb
  - File: content/lessons/l2/act2_5.ipynb
  - File: content/lessons/l2/act2_6.ipynb
  - File: content/lessons/l2/act2_7.ipynb
  - File: content/lessons/l2/act2_8.ipynb
  - File: content/lessons/l2/act2_9.ipynb
  - File: content/lessons/l2/act2_10.ipynb

- Caption: Act III
  Chapters:
  - File: content/lessons/l3/act3_1.ipynb
  - File: content/lessons/l3/act3_2.ipynb
  - File: content/lessons/l3/act3_3.ipynb
  - File: content/lessons/l3/act3_4.ipynb
  - File: content/lessons/l3/act3_5.ipynb
  - File: content/lessons/l3/act3_6.ipynb
  - File: content/lessons/l3/act3_7.ipynb
  - File: content/lessons/l3/act3_8.ipynb
  - File: content/lessons/l3/act3_9.ipynb
  - File: content/lessons/l3/act3_10.ipynb

- Caption: Act IV
  Chapters:
  - File: content/lessons/l4/act4_1.ipynb
  - File: content/lessons/l4/act4_2.ipynb
  - File: content/lessons/l4/act4_3.ipynb
  - File: content/lessons/l4/act4_4.ipynb
  - File: content/lessons/l4/act4_5.ipynb
  - File: content/lessons/l4/act4_6.ipynb
  - File: content/lessons/l4/act4_7.ipynb
  - File: content/lessons/l4/act4_8.ipynb
  - File: content/lessons/l4/act4_9.ipynb
  - File: content/lessons/l4/act4_10.ipynb

- Caption: Act V
  Chapters:
  - File: content/lessons/l5/act5_1.ipynb
  - File: content/lessons/l5/act5_2.ipynb
  - File: content/lessons/l5/act5_3.ipynb
  - File: content/lessons/l5/act5_4.ipynb
  - File: content/lessons/l5/act5_5.ipynb
  - File: content/lessons/l5/act5_6.ipynb
  - File: content/lessons/l5/act5_7.ipynb
  - File: content/lessons/l5/act5_8.ipynb
  - File: content/lessons/l5/act5_9.ipynb
  - File: content/lessons/l5/act5_10.ipynb

- Caption: Epilogue
  Chapters:
  - File: content/lessons/l6/epi_1.ipynb
  - File: content/lessons/l6/epi_2.ipynb
  - File: content/lessons/l6/epi_3.ipynb
  - File: content/lessons/l6/epi_4.ipynb
  - File: content/lessons/l6/epi_5.ipynb
  - File: content/lessons/l6/epi_6.ipynb
  - File: content/lessons/l6/epi_7.ipynb
  - File: content/lessons/l6/epi_8.ipynb
  - File: content/lessons/l6/epi_9.ipynb
  - File: content/lessons/l6/epi_10.ipynb

- Caption: Git & Spoke
  Chapters:
  - File: content/lessons/l7/act7_1.ipynb
  - File: content/lessons/l7/act7_2.ipynb
  - File: content/lessons/l7/act7_3.ipynb
  - File: content/lessons/l7/act7_4.ipynb
  - File: content/lessons/l7/act7_5.ipynb
  - File: content/lessons/l7/act7_6.ipynb
  - File: content/lessons/l7/act7_7.ipynb
  - File: content/lessons/l7/act7_8.ipynb
  - File: content/lessons/l7/act7_9.ipynb
  - File: content/lessons/l7/act7_10.ipynb

- Caption: Courses
  Chapters:
  - URL: https://publichealth.jhu.edu/courses
    Title: Stata Programming
  - File: content/dramatispersonae/high_school_students/high_school_students.ipynb
  - File: content/dramatispersonae/undergraduates/undergraduates.ipynb
  - File: content/dramatispersonae/graduate_students/graduate_students.ipynb
  - File: content/dramatispersonae/medical_students/medical_students.ipynb
  - File: content/dramatispersonae/residents/residents.ipynb
  - File: content/dramatispersonae/fellows/fellows.ipynb
  - File: content/dramatispersonae/faculty/faculty.ipynb
  - File: content/dramatispersonae/analysts/analysts.ipynb
  - File: content/dramatispersonae/staff/staff.ipynb
  - File: content/dramatispersonae/collaborators/collaborators.ipynb
  - File: content/dramatispersonae/graduates/graduates.ipynb
  - File: content/dramatispersonae/high_school_students/calvin_mathews/calvin_mathews.ipynb
  - File: content/dramatispersonae/medical_students/andrew_arking/andrew_arking.ipynb
  - File: content/dramatispersonae/medical_students/andrew_arking/andrew_arking_1.ipynb
  - File: content/dramatispersonae/collaborators/fawaz_al_ammary/fawaz_al_ammary.ipynb
  - File: content/dramatispersonae/collaborators/fawaz_al_ammary/fawaz_al_ammary_1.ipynb

844. revolution#

#!/bin/bash

# Step 1: Navigate to the '1f.ἡἔρις,κ' directory in the 'dropbox' folder
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Step 2: Create and edit the 'three40.sh' file using 'nano'
nano three40.sh

# Step 3: Add execute permissions to the 'three40.sh' script
chmod +x three40.sh

# Step 4: Run the 'three40.sh' script
./three40.sh

# Step 5: Create the 'three40' directory
mkdir three40

# Step 6: Navigate to the 'three40' directory
cd three40

# Step 7: Create and edit the '_toc.yml' file using 'nano'
nano _toc.yml

three40/
├── intro.ipynb
├── prologue.ipynb
├── content/
│   └── lessons/
│       └── l1/
│           ├── act1_1.ipynb
│           ├── act1_2.ipynb
│           ├── act1_3.ipynb
│           └── ...
│       └── l2/
│           ├── act2_1.ipynb
│           ├── act2_2.ipynb
│           └── ...
│       └── ...
│       └── l7/
│           ├── act7_1.ipynb
│           ├── act7_2.ipynb
│           └── ...
├── dramatispersonae/
│   └── high_school_students/
│       └── ...
│   └── undergraduates/
│       └── ...
│   └── ...
│   └── graduates/
│       └── ...
└── ...

845. yml#

#!/bin/bash

# Change the working directory to the desired location
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Uncomment the following line if you need to create the "three40" directory
# nano three40.sh & paste the contents of the three40.sh file
# chmod +x three40.sh
# mkdir three40
# cd three40
# Create the root folder
mkdir -p three40

# nano three40/_toc.yml & paste the contents of the _toc.yml file

# Create the "intro.ipynb" file inside the "root" folder
touch three40/intro.ipynb

# Function to read the chapters from the YAML file using pure bash
get_chapters_from_yaml() {
  local part="$1"
  local toc_file="three40/_toc.yml"
  local lines
  local in_part=false

  while read -r line; do
    if [[ "$line" == *"$part"* ]]; then
      in_part=true
    elif [[ "$line" == *"- File: "* ]]; then
      if "$in_part"; then
        echo "$line" | awk -F': ' '{print $2}' | tr -d ' '
      fi
    elif [[ "$line" == *"-"* ]]; then
      in_part=false
    fi
  done < "$toc_file"
}

# Create parts and chapters based on the _toc.yml structure
parts=(
  "Act I"
  "Act II"
  "Act III"
  "Act IV"
  "Act V"
  "Epilogue"
  "Git & Spoke"
  "Courses"
)

# Loop through parts and create chapters inside each part folder
for part in "${parts[@]}"; do
  part_folder="three40/$part"
  mkdir -p "$part_folder"

  # Get the chapters for the current part from the _toc.yml
  chapters=($(get_chapters_from_yaml "$part"))

  # Create chapter files inside the part folder
  for chapter in "${chapters[@]}"; do
    touch "$part_folder/$chapter.ipynb"
  done
done

echo "Folder structure has been created successfully."

846. iteration~30#

846.1. structure#

Based on the provided information and incorporating the details under the “dramatispersonae” folder, the entire “three40/” directory structure will look like this:

three40/
├── intro.ipynb
├── prologue.ipynb
├── Act I/
│   ├── act1_1.ipynb
│   ├── act1_2.ipynb
│   ├── act1_3.ipynb
│   └── ...
├── Act II/
│   ├── act2_1.ipynb
│   ├── act2_2.ipynb
│   └── ...
├── Act III/
│   ├── act3_1.ipynb
│   ├── act3_2.ipynb
│   ├── act3_3.ipynb
│   ├── act3_4.ipynb
│   └── act3_5.ipynb
├── Act IV/
│   ├── act4_1.ipynb
│   ├── act4_2.ipynb
│   ├── act4_3.ipynb
│   ├── act4_4.ipynb
│   ├── act4_5.ipynb
│   └── act4_6.ipynb
├── Act V/
│   ├── act5_1.ipynb
│   ├── act5_2.ipynb
│   ├── act5_3.ipynb
│   ├── act5_4.ipynb
│   ├── act5_5.ipynb
│   └── act5_6.ipynb
├── Epilogue/
│   ├── epi_1.ipynb
│   ├── epi_2.ipynb
│   ├── epi_3.ipynb
│   ├── epi_4.ipynb
│   ├── epi_5.ipynb
│   ├── epi_6.ipynb
│   ├── epi_7.ipynb
│   └── epi_8.ipynb
├── Gas & Spoke/
│   ├── gas_1.ipynb
│   ├── gas_2.ipynb
│   └── gas_3.ipynb
└── dramatispersonae/
    ├── high_school_students/
    │   ├── high_school_students_1/
    │   │   └── ...
    │   ├── high_school_students_2/
    │   │   └── ...
    │   ├── high_school_students_3/
    │   │   └── ...
    │   ├── high_school_students_4/
    │   │   └── ...
    │   └── high_school_students_5/
    │       └── ...
    ├── undergraduates/
    │   ├── undergraduates_1/
    │   │   └── ...
    │   ├── undergraduates_2/
    │   │   └── ...
    │   ├── undergraduates_3/
    │   │   └── ...
    │   ├── undergraduates_4/
    │   │   └── ...
    │   └── undergraduates_5/
    │       └── ...
    ├── graduates/
    │   ├── graduates_1/
    │   │   └── ...
    │   ├── graduates_2/
    │   │   └── ...
    │   ├── graduates_3/
    │   │   └── ...
    │   ├── graduates_4/
    │   │   └── ...
    │   └── graduates_5/
    │       └── ...
    ├── medical_students/
    │   ├── medical_students_1/
    │   │   └── ...
    │   ├── medical_students_2/
    │   │   └── ...
    │   ├── medical_students_3/
    │   │   └── ...
    │   ├── medical_students_4/
    │   │   └── ...
    │   └── medical_students_5/
    │       └── ...
    ├── residents/
    │   ├── residents_1/
    │   │   └── ...
    │   ├── residents_2/
    │   │   └── ...
    │   ├── residents_3/
    │   │   └── ...
    │   ├── residents_4/
    │   │   └── ...
    │   └── residents_5/
    │       └── ...
    ├── fellows/
    │   ├── fellows_1/
    │   │   └── ...
    │   ├── fellows_2/
    │   │   └── ...
    │   ├── fellows_3/
    │   │   └── ...
    │   ├── fellows_4/
    │   │   └── ...
    │   └── fellows_5/
    │       └── ...
    ├── faculty/
    │   ├── faculty_1/
    │   │   └── ...
    │   ├── faculty_2/
    │   │   └── ...
    │   ├── faculty_3/
    │   │   └── ...
    │   ├── faculty_4/
    │   │   └── ...
    │   └── faculty_5/
    │       └── ...
    ├── analysts/
    │   ├── analysts_1/
    │   │   └── ...
    │   ├── analysts_2/
    │   │   └── ...
    │   ├── analysts_3/
    │   │   └── ...
    │   ├── analysts_4/
    │   │   └── ...
    │   └── analysts_5/
    │       └── ...
    ├── staff/
    │   ├── staff_1/
    │   │   └── ...
    │   ├── staff_2/
    │   │   └── ...
    │   ├── staff_3/
    │   │   └── ...
    │   ├── staff_4/
    │   │   └── ...
    │   └── staff_5/
    │       └── ...
    └── collaborators/
        ├── collaborators_1/
        │   └── ...
        ├── collaborators_2/
        │   └── ...
        ├── collaborators_3/
        │   └── ...
        ├── collaborators_4/
        │   └── ...
        └── collaborators_5/
            └── ...

Note: The ellipsis (…) is used to represent additional files and directories that may be present but are not explicitly shown in the provided structure.

846.2. script#

#!/bin/bash

# Change the working directory to the desired location
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Uncomment the following lines if you need to recreate the "three40" directory
# nano three40.sh & paste the contents of the three40.sh file
# chmod +x three40.sh
# mkdir three40
# cd three40
# Create the root folder
# mkdir -p three40

# nano three40/_toc.yml & paste the contents of the _toc.yml file

# Create the "intro.ipynb" file inside the "root" folder
touch three40/intro.ipynb

# Function to read the chapters from the YAML file using pure bash
get_chapters_from_yaml() {
  local part="$1"
  local toc_file="three40/_toc.yml"
  local line
  local in_part=false

  while IFS= read -r line; do
    if [[ "$line" == "- caption: $part" ]]; then
      in_part=true
    elif [[ "$line" == "- caption:"* ]]; then
      in_part=false
    elif $in_part && [[ "$line" == *"- file: "* ]]; then
      # print the path after "file:", keeping any spaces inside the path
      echo "${line#*: }"
    fi
  done < "$toc_file"
}

# Create parts and chapters based on the _toc.yml structure
parts=(
  "Root"
  "Act I"
  "Act II"
  "Act III"
  "Act IV"
  "Act V"
  "Epilogue"
  "Git & Spoke"
  "Courses"
)

# Loop through parts and create chapters inside each part folder
for part in "${parts[@]}"; do
  part_folder="three40/$part"
  mkdir -p "$part_folder"

  if [[ "$part" == "Root" ]]; then
    # Create the "prologue.ipynb" file inside the "Root" folder
    touch "$part_folder/prologue.ipynb"
  else
    # Get the chapters for the current part from the _toc.yml, one path per line
    # (paths such as "Act I/act1_1.ipynb" already include the part folder)
    while IFS= read -r chapter; do
      [[ -z "$chapter" ]] && continue
      touch "three40/$chapter"
    done < <(get_chapters_from_yaml "$part")
  fi
done

# Create the "dramatispersonae" folder and its subdirectories with loop
dramatispersonae_folders=(
  "high_school_students"
  "undergraduates"
  "graduates"
  "medical_students"
  "residents"
  "fellows"
  "faculty"
  "analysts"
  "staff"
  "collaborators"
)

for folder in "${dramatispersonae_folders[@]}"; do
  mkdir -p "three40/dramatispersonae/$folder"
  touch "three40/dramatispersonae/$folder/$folder.ipynb"
done

# Create additional .ipynb files inside specific subdirectories (create the person folders first)
mkdir -p "three40/dramatispersonae/high_school_students/calvin_mathews"
touch "three40/dramatispersonae/high_school_students/calvin_mathews/calvin_mathews.ipynb"
mkdir -p "three40/dramatispersonae/medical_students/andrew_arking"
touch "three40/dramatispersonae/medical_students/andrew_arking/andrew_arking.ipynb"
touch "three40/dramatispersonae/medical_students/andrew_arking/andrew_arking_1.ipynb"
mkdir -p "three40/dramatispersonae/collaborators/fawaz_al_ammary"
touch "three40/dramatispersonae/collaborators/fawaz_al_ammary/fawaz_al_ammary.ipynb"
touch "three40/dramatispersonae/collaborators/fawaz_al_ammary/fawaz_al_ammary_1.ipynb"

echo "Folder structure has been created successfully."

  

846.3. _toc.yml#

format: jb-book
root: intro.ipynb
title: Play

parts:
- caption: 
  chapters:
  - file: prologue.ipynb

- caption: Act I
  chapters:
  - file: Act I/act1_1.ipynb
  - file: Act I/act1_2.ipynb
  - file: Act I/act1_3.ipynb

- caption: Act II
  chapters:
  - file: Act II/act2_1.ipynb
  - file: Act II/act2_2.ipynb
  - file: Act II/act2_3.ipynb
  - file: Act II/act2_4.ipynb

- caption: Act III
  chapters:
  - file: Act III/act3_1.ipynb
  - file: Act III/act3_2.ipynb
  - file: Act III/act3_3.ipynb
  - file: Act III/act3_4.ipynb
  - file: Act III/act3_5.ipynb

- caption: Act IV
  chapters:
  - file: Act IV/act4_1.ipynb
  - file: Act IV/act4_2.ipynb
  - file: Act IV/act4_3.ipynb
  - file: Act IV/act4_4.ipynb
  - file: Act IV/act4_5.ipynb
  - file: Act IV/act4_6.ipynb

- caption: Act V
  chapters:
  - file: Act V/act5_1.ipynb
  - file: Act V/act5_2.ipynb
  - file: Act V/act5_3.ipynb
  - file: Act V/act5_4.ipynb
  - file: Act V/act5_5.ipynb
  - file: Act V/act5_6.ipynb

- caption: Epilogue
  chapters:
  - file: Epilogue/epi_1.ipynb
  - file: Epilogue/epi_2.ipynb
  - file: Epilogue/epi_3.ipynb
  - file: Epilogue/epi_4.ipynb
  - file: Epilogue/epi_5.ipynb
  - file: Epilogue/epi_6.ipynb
  - file: Epilogue/epi_7.ipynb
  - file: Epilogue/epi_8.ipynb

- caption: Gas & Spoke
  chapters:
  - file: Gas & Spoke/gas_1.ipynb
  - file: Gas & Spoke/gas_2.ipynb
  - file: Gas & Spoke/gas_3.ipynb

- caption: Courses
  chapters: 
  - url: https://publichealth.jhu.edu/courses
    title: Stata Programming 
  - file: dramatis_personae/high_school_students/high_school_students.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_1.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_2.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_3.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_4.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_5.ipynb
  - file: dramatis_personae/under_grads/under_grads.ipynb
  - file: dramatis_personae/under_grads/under_grads_1.ipynb
  - file: dramatis_personae/under_grads/under_grads_2.ipynb
  - file: dramatis_personae/under_grads/under_grads_3.ipynb
  - file: dramatis_personae/under_grads/under_grads_4.ipynb
  - file: dramatis_personae/under_grads/under_grads_5.ipynb
  - file: dramatis_personae/grad_students/grad_students.ipynb
  - file: dramatis_personae/grad_students_1/grad_students_1.ipynb
  - file: dramatis_personae/grad_students_2/grad_students_2.ipynb
  - file: dramatis_personae/grad_students_3/grad_students_3.ipynb
  - file: dramatis_personae/grad_students_4/grad_students_4.ipynb
  - file: dramatis_personae/grad_students_5/grad_students_5.ipynb
  - file: dramatis_personae/medical_students/medical_students.ipynb
  - file: dramatis_personae/medical_students/medical_students_1.ipynb
  - file: dramatis_personae/medical_students/medical_students_2.ipynb
  - file: dramatis_personae/medical_students/medical_students_3.ipynb
  - file: dramatis_personae/medical_students/medical_students_4.ipynb
  - file: dramatis_personae/medical_students/medical_students_5.ipynb
  - file: dramatis_personae/residents/residents.ipynb
  - file: dramatis_personae/residents/residents_1.ipynb
  - file: dramatis_personae/residents/residents_2.ipynb
  - file: dramatis_personae/residents/residents_3.ipynb
  - file: dramatis_personae/residents/residents_4.ipynb
  - file: dramatis_personae/residents/residents_5.ipynb
  - file: dramatis_personae/fellows/fellows.ipynb
  - file: dramatis_personae/fellows/fellows_1.ipynb
  - file: dramatis_personae/fellows/fellows_2.ipynb
  - file: dramatis_personae/fellows/fellows_3.ipynb
  - file: dramatis_personae/fellows/fellows_4.ipynb
  - file: dramatis_personae/fellows/fellows_5.ipynb
  - file: dramatis_personae/faculty/faculty.ipynb
  - file: dramatis_personae/faculty/faculty_1.ipynb
  - file: dramatis_personae/faculty/faculty_2.ipynb
  - file: dramatis_personae/faculty/faculty_3.ipynb
  - file: dramatis_personae/faculty/faculty_4.ipynb
  - file: dramatis_personae/faculty/faculty_5.ipynb
  - file: dramatis_personae/analysts/analysts.ipynb
  - file: dramatis_personae/analysts/analysts_1.ipynb
  - file: dramatis_personae/analysts/analysts_2.ipynb
  - file: dramatis_personae/analysts/analysts_3.ipynb
  - file: dramatis_personae/analysts/analysts_4.ipynb
  - file: dramatis_personae/analysts/analysts_5.ipynb
  - file: dramatis_personae/staff/staff.ipynb
  - file: dramatis_personae/staff/staff_1.ipynb
  - file: dramatis_personae/staff/staff_2.ipynb
  - file: dramatis_personae/staff/staff_3.ipynb
  - file: dramatis_personae/staff/staff_4.ipynb
  - file: dramatis_personae/staff/staff_5.ipynb
  - file: dramatis_personae/collaborators/collaborators.ipynb
  - file: dramatis_personae/collaborators/collaborators_1.ipynb
  - file: dramatis_personae/collaborators/collaborators_2.ipynb
  - file: dramatis_personae/collaborators/collaborators_3.ipynb
  - file: dramatis_personae/collaborators/collaborators_4.ipynb
  - file: dramatis_personae/collaborators/collaborators_5.ipynb
  - file: dramatis_personae/graduates/graduates.ipynb
  - file: dramatis_personae/graduates/graduates_1.ipynb
  - file: dramatis_personae/graduates/graduates_2.ipynb
  - file: dramatis_personae/graduates/graduates_3.ipynb
  - file: dramatis_personae/graduates/graduates_4.ipynb
  - file: dramatis_personae/graduates/graduates_5.ipynb


847. in-a-nutshell#

  • do i just codify the entire 07/01/2006 - 07/02/2023?

  • the entire 17 years of my jhu life?

  • if so then this revolution will be televised!

  • not another soul will be lost to the abyss of the unknowable

  • let them find other ways to get lost, other lifetasks to complete

848. notfancybutworks#

#!/bin/bash

# Change the working directory to the desired location
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Create the "three40" directory
mkdir -p three40

# Create the "Root" folder and the "intro.ipynb" file inside it
mkdir -p "three40/Root"
touch "three40/Root/intro.ipynb"

# Create the "prologue.ipynb" file in the "three40" directory
touch "three40/prologue.ipynb"

# Create "Act I" folder and its subfiles
mkdir -p "three40/Act I"
touch "three40/Act I/act1_1.ipynb"
touch "three40/Act I/act1_2.ipynb"
touch "three40/Act I/act1_3.ipynb"

# Create "Act II" folder and its subfiles
mkdir -p "three40/Act II"
touch "three40/Act II/act2_1.ipynb"
touch "three40/Act II/act2_2.ipynb"
touch "three40/Act II/act2_3.ipynb"
touch "three40/Act II/act2_4.ipynb"

# Create "Act III" folder and its subfiles
mkdir -p "three40/Act III"
touch "three40/Act III/act3_1.ipynb"
touch "three40/Act III/act3_2.ipynb"
touch "three40/Act III/act3_3.ipynb"
touch "three40/Act III/act3_4.ipynb"
touch "three40/Act III/act3_5.ipynb"

# Create "Act IV" folder and its subfiles
mkdir -p "three40/Act IV"
touch "three40/Act IV/act4_1.ipynb"
touch "three40/Act IV/act4_2.ipynb"
touch "three40/Act IV/act4_3.ipynb"
touch "three40/Act IV/act4_4.ipynb"
touch "three40/Act IV/act4_5.ipynb"
touch "three40/Act IV/act4_6.ipynb"

# Create "Act V" folder and its subfiles
mkdir -p "three40/Act V"
touch "three40/Act V/act5_1.ipynb"
touch "three40/Act V/act5_2.ipynb"
touch "three40/Act V/act5_3.ipynb"
touch "three40/Act V/act5_4.ipynb"
touch "three40/Act V/act5_5.ipynb"
touch "three40/Act V/act5_6.ipynb"

# Create "Epilogue" folder and its subfiles
mkdir -p "three40/Epilogue"
touch "three40/Epilogue/epi_1.ipynb"
touch "three40/Epilogue/epi_2.ipynb"
touch "three40/Epilogue/epi_3.ipynb"
touch "three40/Epilogue/epi_4.ipynb"
touch "three40/Epilogue/epi_5.ipynb"
touch "three40/Epilogue/epi_6.ipynb"
touch "three40/Epilogue/epi_7.ipynb"
touch "three40/Epilogue/epi_8.ipynb"

# Create "Git & Spoke" folder and its subfiles
mkdir -p "three40/Git & Spoke"
touch "three40/Git & Spoke/gas_1.ipynb"
touch "three40/Git & Spoke/gas_2.ipynb"
touch "three40/Git & Spoke/gas_3.ipynb"

# Create "Courses" folder and its subfiles
mkdir -p "three40/Courses"
touch "three40/Courses/course1.ipynb"
touch "three40/Courses/course2.ipynb"

# Create "dramatispersonae" folder and its subdirectories
mkdir -p "three40/dramatispersonae/high_school_students"
mkdir -p "three40/dramatispersonae/undergraduates"
mkdir -p "three40/dramatispersonae/graduates"
mkdir -p "three40/dramatispersonae/medical_students"
mkdir -p "three40/dramatispersonae/residents"
mkdir -p "three40/dramatispersonae/fellows"
mkdir -p "three40/dramatispersonae/faculty"
mkdir -p "three40/dramatispersonae/analysts"
mkdir -p "three40/dramatispersonae/staff"
mkdir -p "three40/dramatispersonae/collaborators"

# Create "dramatispersonae" subdirectories with suffixes _1 to _5
for branch in high_school_students undergraduates graduates medical_students residents fellows faculty analysts staff collaborators; do
    for ((i=1; i<=5; i++)); do
        mkdir -p "three40/dramatispersonae/${branch}/${branch}_${i}"
    done
done

# Create additional .ipynb files inside specific subdirectories
touch "three40/dramatispersonae/high_school_students/high_school_students.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates.ipynb"
touch "three40/dramatispersonae/graduates/graduates.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students.ipynb"
touch "three40/dramatispersonae/residents/residents.ipynb"
touch "three40/dramatispersonae/fellows/fellows.ipynb"
touch "three40/dramatispersonae/faculty/faculty.ipynb"
touch "three40/dramatispersonae/analysts/analysts.ipynb"
touch "three40/dramatispersonae/staff/staff.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_1.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_2.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_3.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_4.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_5.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_1.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_2.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_3.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_4.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_5.ipynb"
touch "three40/dramatispersonae/graduates/graduates_1.ipynb"
touch "three40/dramatispersonae/graduates/graduates_2.ipynb"
touch "three40/dramatispersonae/graduates/graduates_3.ipynb"
touch "three40/dramatispersonae/graduates/graduates_4.ipynb"
touch "three40/dramatispersonae/graduates/graduates_5.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_1.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_2.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_3.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_4.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_5.ipynb"
touch "three40/dramatispersonae/residents/residents_1.ipynb"
touch "three40/dramatispersonae/residents/residents_2.ipynb"
touch "three40/dramatispersonae/residents/residents_3.ipynb"
touch "three40/dramatispersonae/residents/residents_4.ipynb"
touch "three40/dramatispersonae/residents/residents_5.ipynb"
touch "three40/dramatispersonae/fellows/fellows_1.ipynb"
touch "three40/dramatispersonae/fellows/fellows_2.ipynb"
touch "three40/dramatispersonae/fellows/fellows_3.ipynb"
touch "three40/dramatispersonae/fellows/fellows_4.ipynb"
touch "three40/dramatispersonae/fellows/fellows_5.ipynb"
touch "three40/dramatispersonae/faculty/faculty_1.ipynb"
touch "three40/dramatispersonae/faculty/faculty_2.ipynb"
touch "three40/dramatispersonae/faculty/faculty_3.ipynb"
touch "three40/dramatispersonae/faculty/faculty_4.ipynb"
touch "three40/dramatispersonae/faculty/faculty_5.ipynb"
touch "three40/dramatispersonae/analysts/analysts_1.ipynb"
touch "three40/dramatispersonae/analysts/analysts_2.ipynb"
touch "three40/dramatispersonae/analysts/analysts_3.ipynb"
touch "three40/dramatispersonae/analysts/analysts_4.ipynb"
touch "three40/dramatispersonae/analysts/analysts_5.ipynb"
touch "three40/dramatispersonae/staff/staff_1.ipynb"
touch "three40/dramatispersonae/staff/staff_2.ipynb"
touch "three40/dramatispersonae/staff/staff_3.ipynb"
touch "three40/dramatispersonae/staff/staff_4.ipynb"
touch "three40/dramatispersonae/staff/staff_5.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_1.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_2.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_3.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_4.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_5.ipynb"

# Display the directory tree
echo "Directory Structure:"
echo "-------------------"
tree three40

echo "Folder structure has been created successfully."

849. stillgotit#

I just met my life, yeah
For the first time
I just met the prettiest girl, in the whole wide world


I just met my wife, yeah
For the first time
I just met the partiest girl, in the whole wide world


talk of misdirection

850. openai#

Prego! Di nulla! (You’re welcome, it’s nothing!) If you have any more questions or need further assistance, feel free to ask. Buona giornata! (Have a great day!)

\(\vdots\)

Great to hear! If you ever need any help or have more questions in the future, don’t hesitate to reach out. Have a wonderful day!

851. babyface#

  • verbs:

  • nemesis:

    • guys whose verb is to do

    • athletes

    • yeah.. get the picture?

852. asante#

  • ghanaian food in gettysburg, maryland:

    • rambow

    • savannah

  • what to order:

    • banku with tilapia or roca

    • fufu with goat meat

    • jollof rice with chicken

    • peanut butter soup with rice balls (with mutu - rice gun)

    • black pepper soup (shito) & okra soup

    • kenkey with fish

  • visit willo & phat:

    • let phat rest

    • no kitchen time

    • all on me!

  • courtesy of:

    • (240) 516-4535

    • james obour (stone)

853. ghana#

  • thc-dn-64707106/etbjhmf_lx_rsdorhrsdq_hm_sgd_vghkd_gnxeqhdmc_cndrm_s_jmnv

  • best west african food is from the following countries in order:

    • ghana

    • nigeria

    • senegal

    • ivory coast

    • mali

    • guinea

    • burkina faso

    • togo

    • benin

    • gambia

    • sierra leone

    • liberia

    • guinea-bissau

    • cape verde

  • github copilot suggested ivory coast as number 2

  • i disagree, as do most west africans

  • some of us have eaten food from all of these countries

854. Stata#

Hi Laura,

I hope this finds you well. Just wanted to drop some ideas by you:

After careful consideration of the course evaluations over the last two years, I’m of the opinion that there should be three Stata Programming classes. These three classes would reflect the diverse backgrounds of the Bloomberg School students. What would be the defining characteristics of each of these classes?

i) Observed data
ii) Expected data
iii) Simulated data

These may seem like abstract categories, but they would allow me to better communicate some fundamental concepts to students. The idea would be to have Stata Programming I (observed data) as the basic class, Stata Programming II (expected data) as the intermediate class, and Stata Programming III (simulated data) as the advanced class. I already have some additional syllabus material for each of these. But, in brief, all would be offered as hybrid. The basic class would be 2 credit units, and the intermediate and advanced classes would each be 1 credit unit.

Any thoughts?

Abi

855. chances#

  • migos

  • yrn 2

  • romantic

  • dance

  • panic

  • fancy

  • outstanding

  • i took a whole lot..

  • bandos

856. Atliens#

  • throw your hands in the air

  • and if you like fish and grits?

  • every body let me hear you say.. oh yeah yurr

857. g.o.a.t.#

  • man’s greatest invention

  • a mere 88 keys

  • piano

08/03/2023#

858. igotthat#

  • warryn campbell

  • erica campbell

  • still got it

859. yesterday#

  • workflow: rollover jupyter-book create .

  • herein we take 100% control of the toc.yml file

  • lets see how it all comes together:

859.1 terminal#

cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
mkdir -p three40
nano three40/_toc.yml

859.2 paste#

format: jb-book
root: intro.ipynb
title: Play

parts:
- caption: Prologue
  chapters:
  - file: prologue.ipynb

- caption: Act I
  chapters:
  - file: Act I/act1_1.ipynb
  - file: Act I/act1_2.ipynb
  - file: Act I/act1_3.ipynb

- caption: Act II
  chapters:
  - file: Act II/act2_1.ipynb
  - file: Act II/act2_2.ipynb
  - file: Act II/act2_3.ipynb
  - file: Act II/act2_4.ipynb

- caption: Act III
  chapters:
  - file: Act III/act3_1.ipynb
  - file: Act III/act3_2.ipynb
  - file: Act III/act3_3.ipynb
  - file: Act III/act3_4.ipynb
  - file: Act III/act3_5.ipynb

- caption: Act IV
  chapters:
  - file: Act IV/act4_1.ipynb
  - file: Act IV/act4_2.ipynb
  - file: Act IV/act4_3.ipynb
  - file: Act IV/act4_4.ipynb
  - file: Act IV/act4_5.ipynb
  - file: Act IV/act4_6.ipynb

- caption: Act V
  chapters:
  - file: Act V/act5_1.ipynb
  - file: Act V/act5_2.ipynb
  - file: Act V/act5_3.ipynb
  - file: Act V/act5_4.ipynb
  - file: Act V/act5_5.ipynb
  - file: Act V/act5_6.ipynb

- caption: Epilogue
  chapters:
  - file: Epilogue/epi_1.ipynb
  - file: Epilogue/epi_2.ipynb
  - file: Epilogue/epi_3.ipynb
  - file: Epilogue/epi_4.ipynb
  - file: Epilogue/epi_5.ipynb
  - file: Epilogue/epi_6.ipynb
  - file: Epilogue/epi_7.ipynb
  - file: Epilogue/epi_8.ipynb

- caption: Gas & Spoke
  chapters:
  - file: Gas & Spoke/gas_1.ipynb
  - file: Gas & Spoke/gas_2.ipynb
  - file: Gas & Spoke/gas_3.ipynb

- caption: Courses
  chapters: 
  - url: https://publichealth.jhu.edu/courses
    title: Stata Programming 
  - file: dramatispersonae/high_school_students/high_school_students.ipynb
  - file: dramatispersonae/high_school_students/high_school_students_1.ipynb
  - file: dramatispersonae/high_school_students/high_school_students_2.ipynb
  - file: dramatispersonae/high_school_students/high_school_students_3.ipynb
  - file: dramatispersonae/high_school_students/high_school_students_4.ipynb
  - file: dramatispersonae/high_school_students/high_school_students_5.ipynb
  - file: dramatispersonae/undergraduates/undergraduates.ipynb
  - file: dramatispersonae/undergraduates/undergraduates_1.ipynb
  - file: dramatispersonae/undergraduates/undergraduates_2.ipynb
  - file: dramatispersonae/undergraduates/undergraduates_3.ipynb
  - file: dramatispersonae/undergraduates/undergraduates_4.ipynb
  - file: dramatispersonae/undergraduates/undergraduates_5.ipynb
  - file: dramatispersonae/graduate_students/graduate_students.ipynb
  - file: dramatispersonae/graduate_students/graduate_students_1.ipynb
  - file: dramatispersonae/graduate_students/graduate_students_2.ipynb
  - file: dramatispersonae/graduate_students/graduate_students_3.ipynb
  - file: dramatispersonae/graduate_students/graduate_students_4.ipynb
  - file: dramatispersonae/graduate_students/graduate_students_5.ipynb
  - file: dramatispersonae/medical_students/medical_students.ipynb
  - file: dramatispersonae/medical_students/medical_students_1.ipynb
  - file: dramatispersonae/medical_students/medical_students_2.ipynb
  - file: dramatispersonae/medical_students/medical_students_3.ipynb
  - file: dramatispersonae/medical_students/medical_students_4.ipynb
  - file: dramatispersonae/medical_students/medical_students_5.ipynb
  - file: dramatispersonae/residents/residents.ipynb
  - file: dramatispersonae/residents/residents_1.ipynb
  - file: dramatispersonae/residents/residents_2.ipynb
  - file: dramatispersonae/residents/residents_3.ipynb
  - file: dramatispersonae/residents/residents_4.ipynb
  - file: dramatispersonae/residents/residents_5.ipynb
  - file: dramatispersonae/fellows/fellows.ipynb
  - file: dramatispersonae/fellows/fellows_1.ipynb
  - file: dramatispersonae/fellows/fellows_2.ipynb
  - file: dramatispersonae/fellows/fellows_3.ipynb
  - file: dramatispersonae/fellows/fellows_4.ipynb
  - file: dramatispersonae/fellows/fellows_5.ipynb
  - file: dramatispersonae/faculty/faculty.ipynb
  - file: dramatispersonae/faculty/faculty_1.ipynb
  - file: dramatispersonae/faculty/faculty_2.ipynb
  - file: dramatispersonae/faculty/faculty_3.ipynb
  - file: dramatispersonae/faculty/faculty_4.ipynb
  - file: dramatispersonae/faculty/faculty_5.ipynb
  - file: dramatispersonae/analysts/analysts.ipynb
  - file: dramatispersonae/analysts/analysts_1.ipynb
  - file: dramatispersonae/analysts/analysts_2.ipynb
  - file: dramatispersonae/analysts/analysts_3.ipynb
  - file: dramatispersonae/analysts/analysts_4.ipynb
  - file: dramatispersonae/analysts/analysts_5.ipynb
  - file: dramatispersonae/staff/staff.ipynb
  - file: dramatispersonae/staff/staff_1.ipynb
  - file: dramatispersonae/staff/staff_2.ipynb
  - file: dramatispersonae/staff/staff_3.ipynb
  - file: dramatispersonae/staff/staff_4.ipynb
  - file: dramatispersonae/staff/staff_5.ipynb
  - file: dramatispersonae/collaborators/collaborators.ipynb
  - file: dramatispersonae/collaborators/collaborators_1.ipynb
  - file: dramatispersonae/collaborators/collaborators_2.ipynb
  - file: dramatispersonae/collaborators/collaborators_3.ipynb
  - file: dramatispersonae/collaborators/collaborators_4.ipynb
  - file: dramatispersonae/collaborators/collaborators_5.ipynb
  - file: dramatispersonae/graduates/graduates.ipynb
  - file: dramatispersonae/graduates/graduates_1.ipynb
  - file: dramatispersonae/graduates/graduates_2.ipynb
  - file: dramatispersonae/graduates/graduates_3.ipynb
  - file: dramatispersonae/graduates/graduates_4.ipynb
  - file: dramatispersonae/graduates/graduates_5.ipynb

859.3 bash#

./three40.sh

859.4 tree#

#!/bin/bash

# Change the working directory to the desired location
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Create the "three40" directory
# mkdir -p three40
# nano three40/_toc.yml

# Create the "Root" folder and the "intro.ipynb" file inside it
touch "three40/intro.ipynb"

# Create the "prologue.ipynb" file in the "three40" directory
touch "three40/prologue.ipynb"

# Create "Act I" folder and its subfiles
mkdir -p "three40/Act I"
touch "three40/Act I/act1_1.ipynb"
touch "three40/Act I/act1_2.ipynb"
touch "three40/Act I/act1_3.ipynb"

# Create "Act II" folder and its subfiles
mkdir -p "three40/Act II"
touch "three40/Act II/act2_1.ipynb"
touch "three40/Act II/act2_2.ipynb"
touch "three40/Act II/act2_3.ipynb"
touch "three40/Act II/act2_4.ipynb"

# Create "Act III" folder and its subfiles
mkdir -p "three40/Act III"
touch "three40/Act III/act3_1.ipynb"
touch "three40/Act III/act3_2.ipynb"
touch "three40/Act III/act3_3.ipynb"
touch "three40/Act III/act3_4.ipynb"
touch "three40/Act III/act3_5.ipynb"

# Create "Act IV" folder and its subfiles
mkdir -p "three40/Act IV"
touch "three40/Act IV/act4_1.ipynb"
touch "three40/Act IV/act4_2.ipynb"
touch "three40/Act IV/act4_3.ipynb"
touch "three40/Act IV/act4_4.ipynb"
touch "three40/Act IV/act4_5.ipynb"
touch "three40/Act IV/act4_6.ipynb"

# Create "Act V" folder and its subfiles
mkdir -p "three40/Act V"
touch "three40/Act V/act5_1.ipynb"
touch "three40/Act V/act5_2.ipynb"
touch "three40/Act V/act5_3.ipynb"
touch "three40/Act V/act5_4.ipynb"
touch "three40/Act V/act5_5.ipynb"
touch "three40/Act V/act5_6.ipynb"

# Create "Epilogue" folder and its subfiles
mkdir -p "three40/Epilogue"
touch "three40/Epilogue/epi_1.ipynb"
touch "three40/Epilogue/epi_2.ipynb"
touch "three40/Epilogue/epi_3.ipynb"
touch "three40/Epilogue/epi_4.ipynb"
touch "three40/Epilogue/epi_5.ipynb"
touch "three40/Epilogue/epi_6.ipynb"
touch "three40/Epilogue/epi_7.ipynb"
touch "three40/Epilogue/epi_8.ipynb"

# Create "Git & Spoke" folder and its subfiles
mkdir -p "three40/Git & Spoke"
touch "three40/Git & Spoke/gas_1.ipynb"
touch "three40/Git & Spoke/gas_2.ipynb"
touch "three40/Git & Spoke/gas_3.ipynb"

# Create "Courses" folder and its subfiles
mkdir -p "three40/Courses"
touch "three40/Courses/course1.ipynb"
touch "three40/Courses/course2.ipynb"

# Create "dramatispersonae" folder and its subdirectories
mkdir -p "three40/dramatispersonae/high_school_students"
mkdir -p "three40/dramatispersonae/undergraduates"
mkdir -p "three40/dramatispersonae/graduates"
mkdir -p "three40/dramatispersonae/medical_students"
mkdir -p "three40/dramatispersonae/residents"
mkdir -p "three40/dramatispersonae/fellows"
mkdir -p "three40/dramatispersonae/faculty"
mkdir -p "three40/dramatispersonae/analysts"
mkdir -p "three40/dramatispersonae/staff"
mkdir -p "three40/dramatispersonae/collaborators"

# Create "dramatispersonae" subdirectories with suffixes _1 to _5
for branch in high_school_students undergraduates graduates medical_students residents fellows faculty analysts staff collaborators; do
    for ((i=1; i<=5; i++)); do
        mkdir -p "three40/dramatispersonae/${branch}/${branch}_${i}"
    done
done

# Create additional .ipynb files inside specific subdirectories
touch "three40/dramatispersonae/high_school_students/high_school_students.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates.ipynb"
touch "three40/dramatispersonae/graduates/graduates.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students.ipynb"
touch "three40/dramatispersonae/residents/residents.ipynb"
touch "three40/dramatispersonae/fellows/fellows.ipynb"
touch "three40/dramatispersonae/faculty/faculty.ipynb"
touch "three40/dramatispersonae/analysts/analysts.ipynb"
touch "three40/dramatispersonae/staff/staff.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_1.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_2.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_3.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_4.ipynb"
touch "three40/dramatispersonae/high_school_students/high_school_students_5.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_1.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_2.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_3.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_4.ipynb"
touch "three40/dramatispersonae/undergraduates/undergraduates_5.ipynb"
touch "three40/dramatispersonae/graduates/graduates_1.ipynb"
touch "three40/dramatispersonae/graduates/graduates_2.ipynb"
touch "three40/dramatispersonae/graduates/graduates_3.ipynb"
touch "three40/dramatispersonae/graduates/graduates_4.ipynb"
touch "three40/dramatispersonae/graduates/graduates_5.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_1.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_2.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_3.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_4.ipynb"
touch "three40/dramatispersonae/medical_students/medical_students_5.ipynb"
touch "three40/dramatispersonae/residents/residents_1.ipynb"
touch "three40/dramatispersonae/residents/residents_2.ipynb"
touch "three40/dramatispersonae/residents/residents_3.ipynb"
touch "three40/dramatispersonae/residents/residents_4.ipynb"
touch "three40/dramatispersonae/residents/residents_5.ipynb"
touch "three40/dramatispersonae/fellows/fellows_1.ipynb"
touch "three40/dramatispersonae/fellows/fellows_2.ipynb"
touch "three40/dramatispersonae/fellows/fellows_3.ipynb"
touch "three40/dramatispersonae/fellows/fellows_4.ipynb"
touch "three40/dramatispersonae/fellows/fellows_5.ipynb"
touch "three40/dramatispersonae/faculty/faculty_1.ipynb"
touch "three40/dramatispersonae/faculty/faculty_2.ipynb"
touch "three40/dramatispersonae/faculty/faculty_3.ipynb"
touch "three40/dramatispersonae/faculty/faculty_4.ipynb"
touch "three40/dramatispersonae/faculty/faculty_5.ipynb"
touch "three40/dramatispersonae/analysts/analysts_1.ipynb"
touch "three40/dramatispersonae/analysts/analysts_2.ipynb"
touch "three40/dramatispersonae/analysts/analysts_3.ipynb"
touch "three40/dramatispersonae/analysts/analysts_4.ipynb"
touch "three40/dramatispersonae/analysts/analysts_5.ipynb"
touch "three40/dramatispersonae/staff/staff_1.ipynb"
touch "three40/dramatispersonae/staff/staff_2.ipynb"
touch "three40/dramatispersonae/staff/staff_3.ipynb"
touch "three40/dramatispersonae/staff/staff_4.ipynb"
touch "three40/dramatispersonae/staff/staff_5.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_1.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_2.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_3.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_4.ipynb"
touch "three40/dramatispersonae/collaborators/collaborators_5.ipynb"

# Display the directory tree
echo "Directory Structure:"
echo "-------------------"
echo "three40/
├── intro.ipynb
├── prologue.ipynb
├── Act I/
│   ├── act1_1.ipynb
│   ├── act1_2.ipynb
│   ├── act1_3.ipynb
│   └── ...
├── Act II/
│   ├── act2_1.ipynb
│   ├── act2_2.ipynb
│   └── ...
├── Act III/
│   ├── act3_1.ipynb
│   ├── act3_2.ipynb
│   ├── act3_3.ipynb
│   ├── act3_4.ipynb
│   └── act3_5.ipynb
├── Act IV/
│   ├── act4_1.ipynb
│   ├── act4_2.ipynb
│   ├── act4_3.ipynb
│   ├── act4_4.ipynb
│   ├── act4_5.ipynb
│   └── act4_6.ipynb
├── Act V/
│   ├── act5_1.ipynb
│   ├── act5_2.ipynb
│   ├── act5_3.ipynb
│   ├── act5_4.ipynb
│   ├── act5_5.ipynb
│   └── act5_6.ipynb
├── Epilogue/
│   ├── epi_1.ipynb
│   ├── epi_2.ipynb
│   ├── epi_3.ipynb
│   ├── epi_4.ipynb
│   ├── epi_5.ipynb
│   ├── epi_6.ipynb
│   ├── epi_7.ipynb
│   └── epi_8.ipynb
├── Gas & Spoke/
│   ├── gas_1.ipynb
│   ├── gas_2.ipynb
│   └── gas_3.ipynb
└── dramatispersonae/
    ├── high_school_students/
    │   ├── high_school_students_1/
    │   │   └── ...
    │   ├── high_school_students_2/
    │   │   └── ...
    │   ├── high_school_students_3/
    │   │   └── ...
    │   ├── high_school_students_4/
    │   │   └── ...
    │   └── high_school_students_5/
    │       └── ...
    ├── undergraduates/
    │   ├── undergraduates_1/
    │   │   └── ...
    │   ├── undergraduates_2/
    │   │   └── ...
    │   ├── undergraduates_3/
    │   │   └── ...
    │   ├── undergraduates_4/
    │   │   └── ...
    │   └── undergraduates_5/
    │       └── ...
    ├── graduates/
    │   ├── graduates_1/
    │   │   └── ...
    │   ├── graduates_2/
    │   │   └── ...
    │   ├── graduates_3/
    │   │   └── ...
    │   ├── graduates_4/
    │   │   └── ...
    │   └── graduates_5/
    │       └── ...
    ├── medical_students/
    │   ├── medical_students_1/
    │   │   └── ...
    │   ├── medical_students_2/
    │   │   └── ...
    │   ├── medical_students_3/
    │   │   └── ...
    │   ├── medical_students_4/
    │   │   └── ...
    │   └── medical_students_5/
    │       └── ...
    ├── residents/
    │   ├── residents_1/
    │   │   └── ...
    │   ├── residents_2/
    │   │   └── ...
    │   ├── residents_3/
    │   │   └── ...
    │   ├── residents_4/
    │   │   └── ...
    │   └── residents_5/
    │       └── ...
    ├── fellows/
    │   ├── fellows_1/
    │   │   └── ...
    │   ├── fellows_2/
    │   │   └── ...
    │   ├── fellows_3/
    │   │   └── ...
    │   ├── fellows_4/
    │   │   └── ...
    │   └── fellows_5/
    │       └── ...
    ├── faculty/
    │   ├── faculty_1/
    │   │   └── ...
    │   ├── faculty_2/
    │   │   └── ...
    │   ├── faculty_3/
    │   │   └── ...
    │   ├── faculty_4/
    │   │   └── ...
    │   └── faculty_5/
    │       └── ...
    ├── analysts/
    │   ├── analysts_1/
    │   │   └── ...
    │   ├── analysts_2/
    │   │   └── ...
    │   ├── analysts_3/
    │   │   └── ...
    │   ├── analysts_4/
    │   │   └── ...
    │   └── analysts_5/
    │       └── ...
    ├── staff/
    │   ├── staff_1/
    │   │   └── ...
    │   ├── staff_2/
    │   │   └── ...
    │   ├── staff_3/
    │   │   └── ...
    │   ├── staff_4/
    │   │   └── ...
    │   └── staff_5/
    │       └── ...
    └── collaborators/
        ├── collaborators_1/
        │   └── ...
        ├── collaborators_2/
        │   └── ...
        ├── collaborators_3/
        │   └── ...
        ├── collaborators_4/
        │   └── ...
        └── collaborators_5/
            └── ..."
echo "Folder structure has been created successfully."
mv three40.sh three40/three40.sh

859.5#

  • create a workdir-gitrepo .sh file

  • build your .html

  • push to github

860. pwomd#

cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
mkdir -p three40
nano three40/three40.sh
chmod +x three40/three40.sh
nano three40/_toc.yml
nano three40/_config.yml
./three40/three40.sh
find . -name "*.ipynb" -exec cp "notebook.ipynb" {} \;
nano three40/three40.six100.sh
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
git clone https://github.com/jhustata/six100
jb build three40
cp -r three40/* six100
cd six100
git add ./*
git commit -m "first jb created manually"
git push
ghp-import -n -p -f _build/html
chmod +x three40/three40.six100.sh
./three40/three40.six100.sh

861. bloc/githistory.sh#

#!/bin/bash

# Function to reset to a clean state.
reset_state() {
    # Abort any ongoing rebase.
    git rebase --abort &> /dev/null && echo "Aborted an ongoing rebase."

    # Stash any unstaged changes to ensure operations can proceed.
    git stash save "Unstaged changes before running githistory.sh" && echo "Stashed unstaged changes."

    # Remove any lingering rebase directories.
    if [ -d ".git/rebase-merge" ] || [ -d ".git/rebase-apply" ]; then
        rm -rf .git/rebase-*
        echo "Removed lingering rebase directories."
    fi
}

# Navigate to the main working directory.
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Navigate to the six100 directory.
cd six100 || { echo "Directory six100 does not exist. Exiting."; exit 1; }

# Reset to a clean state.
reset_state

# Fetch the latest changes from temp_og_repo using SSH.
if git fetch git@github.com:afecdvi/temp_og_repo.git main; then
    echo "Successfully fetched changes via SSH."
else
    echo "Failed to fetch changes using SSH. Exiting."
    exit 1
fi

# Reset the local branch to match the fetched changes.
git reset --hard FETCH_HEAD
echo "Local branch reset to match fetched changes."

# Check for network connection.
if ! ping -c 1 google.com &> /dev/null; then
    echo "No internet connection. Exiting."
    exit 1
fi

# Check repository size.
REPO_SIZE=$(du -sh .git | cut -f1)
echo "Repository size: $REPO_SIZE"

# Adjust Git configurations.
POST_BUFFER_SIZE=$(( (RANDOM % 200 + 300) * 1048576 ))
LOW_SPEED_LIMIT=$(( RANDOM % 5000 + 2000 ))
LOW_SPEED_TIME=$(( RANDOM % 60 + 30 ))

git config http.postBuffer $POST_BUFFER_SIZE
git config http.lowSpeedLimit $LOW_SPEED_LIMIT
git config http.lowSpeedTime $LOW_SPEED_TIME
echo "Adjusted Git's buffer size to $POST_BUFFER_SIZE, low speed limit to $LOW_SPEED_LIMIT and low speed time to $LOW_SPEED_TIME."

# Push the changes to the remote repository using SSH and verbose logging.
if git push git@github.com:afecdvi/og.git main --force -v; then
    echo "Successfully pushed changes using SSH."
    # Unstash any changes we stashed earlier.
    git stash pop &> /dev/null && echo "Restored previously stashed changes."
    echo "Script completed successfully!"
else
    echo "Failed to push changes even with SSH. Exiting."
    git stash pop &> /dev/null && echo "Restored previously stashed changes."
    exit 1
fi

862. conditionals#

  • if then else fi

    • if introduces the condition

    • then introduces the action taken when the condition holds

    • else introduces the alternative action

    • fi marks the end of the conditional

  • case esac

    • case introduces the value that gets matched against patterns

    • esac marks the end of the case statement

  • for do done

    • for iterates over a list of values

    • do introduces the action run for each value

    • done marks the end of the loop

  • while do done

    • while repeats the action for as long as its condition stays true

    • do introduces the action

    • done marks the end of the loop

  • until do done

    • until repeats the action until its condition becomes true

    • do introduces the action

    • done marks the end of the loop

  • a minimal sketch of all five constructs follows below
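
A minimal bash sketch of the five constructs above; the values and variable names are purely illustrative and not from any script in this workflow:

#!/bin/bash

x=7

# if / then / else / fi
if [ "$x" -gt 5 ]; then
  echo "x is greater than 5"
else
  echo "x is 5 or less"
fi

# case / esac
case "$x" in
  1|2|3) echo "x is small" ;;
  *)     echo "x is not small" ;;
esac

# for / do / done: iterate over a list of values
for i in 1 2 3; do
  echo "for: $i"
done

# while / do / done: repeat while the condition is true
n=0
while [ "$n" -lt 3 ]; do
  echo "while: $n"
  n=$((n + 1))
done

# until / do / done: repeat until the condition becomes true
m=0
until [ "$m" -ge 3 ]; do
  echo "until: $m"
  m=$((m + 1))
done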

863. stata#

  • let it be said that unconditional code is the most basic, conditional code is intermediate, and looping code is advanced

  • so let’s start with the most basic

  • if { } else { }

    • if introduces the condition

    • the braces enclose the action

    • else introduces the alternative action

    • unlike bash, Stata has no then or fi; the closing brace ends the conditional

  1. unconditional

clear
set obs 100
gen x = runiform()
display "x is greater than 0.5"

  2. conditional

* note: when x is a variable, Stata's -if- command evaluates the expression using the first observation of x only
if x > 0.5 {
    display "x is greater than 0.5"
}
else if x > 0.25 {
    display "x is greater than 0.25"
}
else if x > 0.125 {
    display "x is greater than 0.125"
}
else {
    display "x is less than or equal to 0.125"
}

  3. looping

forvalues i = 1/10 {
    if `i' < 5 {
        display "i = `i'"
    }
    else {
        display "i is greater than or equal to 5"
    }
}
  • we’ll build the three classes around these three basic concepts

864. bloc/blocdenotas.sh#

#!/bin/bash

# Ask the user for the path to the SSH key
read -p "Please provide the path to your SSH key (e.g. ~/.ssh/id_blocdenotas): " SSH_KEY_PATH

# If no input is provided, exit the script
if [[ -z "$SSH_KEY_PATH" ]]; then
    echo "No SSH key provided. Exiting."
    exit 1
fi

# Expand a leading ~ (read does not expand it), then check if the SSH key exists
SSH_KEY_PATH="${SSH_KEY_PATH/#\~/$HOME}"
if [[ ! -f "$SSH_KEY_PATH" ]]; then
    echo "The provided SSH key does not exist. Exiting."
    exit 1
fi

# Change directory to ~/dropbox/1f.ἡἔρις,κ/1.ontology
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Build Jupyter Book
jb build bloc
cp -r bloc/* denotas

# Change directory to 'denotas'
cd denotas

# Add all files in the current directory to Git
git add ./*

# Commit changes to Git with the given commit message
git commit -m "introducing SSH keys to bloc/blocdenotas.sh"

# Use the provided SSH key for the upcoming Git commands
export GIT_SSH_COMMAND="ssh -i $SSH_KEY_PATH"

# Ensure using the SSH URL for the repository
git remote set-url origin git@github.com:muzaale/denotas.git

# Push changes to GitHub
git push origin main

# Import the built HTML to gh-pages and push to GitHub
ghp-import -n -p -f _build/html

# Unset the custom GIT_SSH_COMMAND to avoid affecting other git operations
unset GIT_SSH_COMMAND

865. bloc/blocdenotas.sh#

866. mb/og#

867. refine#

  • workflow 6.0

  • default ssh key

  • lets try this again

  • this hasn’t worked

  • get back to the basics

#!/bin/bash

# Default SSH Key path
DEFAULT_SSH_KEY_PATH="$HOME/.ssh/id_blocdenotas"  # use $HOME: a quoted "~" would not expand

# Prompt user for the path to their private SSH key
read -p "Enter the path to your private SSH key [default: $DEFAULT_SSH_KEY_PATH]: " SSH_KEY_PATH

# If user doesn't input anything, use the default
SSH_KEY_PATH=${SSH_KEY_PATH:-$DEFAULT_SSH_KEY_PATH}

if [[ ! -f "$SSH_KEY_PATH" ]]; then
    echo "Error: SSH key not found at $SSH_KEY_PATH."
    exit 1
fi

# Use the specified SSH key for git operations in this script
export GIT_SSH_COMMAND="ssh -i $SSH_KEY_PATH"

# Change directory to ~/dropbox/1f.ἡἔρις,κ/1.ontology
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Build the Jupyter Book from the 'bloc' source directory
jb build bloc
cp -r bloc/* denotas

# Change directory to 'denotas'
cd denotas

# Add all files in the current directory to Git
git add ./*

# Commit changes to Git with the given commit message
git commit -m "automate updates to denotas"

# Ensure using the SSH URL for the repository
git remote set-url origin git@github.com:muzaale/denotas

# Push changes to GitHub
git push 

# Import the built HTML to gh-pages and push to GitHub
ghp-import -n -p -f _build/html

868. reboot#

  • workflow 7.0

  • default ssh key

  • lets try this again

  • this hasn’t worked

  • but first status update

868.1. githistory.sh#

  1. I have a new repo: jhustata/six100

  2. There’s this file seasons.docx in its main branch

  3. Lets look at its git history:

History for six100/seasons.docx
Commits on Aug 3, 2023
import seasons.docx and later its .git history
@muzaale
muzaale committed 5 hours ago
End of commit history for this file
  4. Now I wish to transfer the git history from an older repo: afecdvi/og

  5. Here’s what it looks like:

History for og/seasons.docx
Commits on Aug 2, 2023
send this version to fawaz for review
@muzaale
muzaale committed yesterday
Commits on Aug 1, 2023
1. jon synder added as co-author 
@muzaale
muzaale committed 2 days ago
Commits on Jul 25, 2023
Feedback from Abi on 07/25/2023: mostly stylistic. consider Fourier s… 
@muzaale
muzaale committed last week
Commits on Jul 20, 2023
first & a half substantive edit of preface, hub/papers, seasons_*.doc… 
@muzaale
muzaale committed 2 weeks ago
End of commit history for this file
  6. Here’s my local machine:

(base) d@Poseidon 1.ontology % pwd
/Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology
(base) d@Poseidon 1.ontology % ls -l
total 0
drwxr-xr-x@  28 d  staff   896 Aug  3 16:56 _six100_
drwxr-xr-x@  21 d  staff   672 Jul 30 17:41 amagunju
drwxr-xr-x@ 276 d  staff  8832 Aug  3 15:54 bloc
drwxr-xr-x@  18 d  staff   576 Jul 18 04:47 buch
drwxr-xr-x@   4 d  staff   128 Aug  2 07:43 content
drwxr-xr-x@ 280 d  staff  8960 Aug  3 18:46 denotas
drwxr-xr-x@  80 d  staff  2560 Jul 29 08:52 fena
drwxr-xr-x@  15 d  staff   480 Aug  1 14:43 fenagas
drwxr-xr-x@  13 d  staff   416 Jul 28 20:00 ffena
drwxr-xr-x@  22 d  staff   704 Jul 30 16:26 gano
drwxr-xr-x@  13 d  staff   416 Jul 27 17:13 kelele
drwxr-xr-x@  29 d  staff   928 Jul 20 20:26 libro
drwxr-xr-x@ 144 d  staff  4608 Jun 23 23:20 livre
drwxr-xr-x@  14 d  staff   448 Aug  3 18:03 llc
drwxr-xr-x@  20 d  staff   640 Aug  2 13:18 mb
drwxr-xr-x@  12 d  staff   384 Jul 27 16:22 ngoma
drwxr-xr-x@  22 d  staff   704 Aug  1 12:59 og
drwxr-xr-x@  15 d  staff   480 Jul 31 01:05 repos
drwxr-xr-x@  42 d  staff  1344 Aug  3 20:41 six100
drwxr-xr-x@  18 d  staff   576 Jul 18 10:57 spring
drwxr-xr-x@ 139 d  staff  4448 Jun 25 08:29 summer
drwxr-xr-x@  22 d  staff   704 Aug  3 16:51 temp_og_repo
drwxr-xr-x@  26 d  staff   832 Aug  3 15:54 three40
drwxr-xr-x@  14 d  staff   448 Jul 31 06:24 track
drwxr-xr-x@ 102 d  staff  3264 Jul 29 09:28 tusirike
drwxr-xr-x@  25 d  staff   800 Jul 20 20:21 verano
drwxr-xr-x@  12 d  staff   384 Jul 28 19:59 yaffe
(base) d@Poseidon 1.ontology % 
  7. I want to transfer the git history from og/seasons.docx to six100/seasons.docx

  8. the corresponding directories are og (old) and six100 (new)

  9. I’ll use the following command:

git filter-branch --index-filter \
'git ls-files -s | sed "s-\t\"*-&six100/-" |
GIT_INDEX_FILE=$GIT_INDEX_FILE.new \
git update-index --index-info &&
mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"' HEAD

868.2. housekeeping#

(base) d@Poseidon 1.ontology % ls -l
total 0
drwxr-xr-x@  20 d  staff   640 Aug  4 00:20 blank
drwxr-xr-x@ 276 d  staff  8832 Aug  3 15:54 bloc
drwxr-xr-x@  21 d  staff   672 Aug  4 00:23 canvas
drwxr-xr-x@ 280 d  staff  8960 Aug  3 18:46 denotas
drwxr-xr-x@  15 d  staff   480 Aug  1 14:43 fenagas
drwxr-xr-x@  29 d  staff   928 Jul 20 20:26 libro
drwxr-xr-x@ 144 d  staff  4608 Jun 23 23:20 livre
drwxr-xr-x@  14 d  staff   448 Aug  3 18:03 llc
drwxr-xr-x@  20 d  staff   640 Aug  2 13:18 mb
drwxr-xr-x@  22 d  staff   704 Aug  1 12:59 og
drwxr-xr-x@  15 d  staff   480 Jul 31 01:05 repos
drwxr-xr-x@  18 d  staff   576 Jul 18 10:57 spring
drwxr-xr-x@ 139 d  staff  4448 Jun 25 08:29 summer
drwxr-xr-x@  14 d  staff   448 Jul 31 06:24 track
drwxr-xr-x@  25 d  staff   800 Jul 20 20:21 verano

08/04/2023#

869. victory#

#!/bin/bash

cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Ensure the script stops on first error
set -e

# 1. Remove the "og" directory
rm -rf og

# 2. Clone the "og" repository
git clone https://github.com/afecdvi/og

# 3. Navigate to "og" and generate patches for seasons.docx
cd og
echo "Generating patches for seasons.docx..."
git log --pretty=email --patch-with-stat --reverse -- seasons.docx > seasons.docx.patch

# 4. Remove the "canvas" directory and clone the new repository
cd ..
rm -rf canvas
git clone https://github.com/muzaale/canvas

# 5. Apply patches to the "canvas" repository
cd canvas
echo "Applying patches to canvas repository..."
git am < ../og/seasons.docx.patch

# 6. Setup for SSH push to "canvas" repository
echo "Setting up SSH for secure push..."
chmod 600 ~/.ssh/id_blankcanvas
ssh-add -D
git remote set-url origin git@github.com:muzaale/canvas
ssh-add ~/.ssh/id_blankcanvas

# Optional: If you're using a remote, push changes to canvas
echo "Pushing changes to remote repository..."
git push

# 7. Clean up (we are back in the working directory, so the patch lives inside og/)
cd ..
rm og/seasons.docx.patch
rm -rf og

echo "Migration completed successfully!"

870. gh.sh#

#!/bin/bash
set -e  # Stop on any error

# Variables
OG_REPO="https://github.com/afecdvi/og"
CANVAS_REPO="git@github.com:muzaale/canvas"
SSH_KEY="$HOME/.ssh/id_blankcanvas"
FILENAME="seasons.docx"
BRANCH_NAME="merge_branch"

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "git could not be found. Please install git."
    exit 1
fi

# Navigate to the working directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
# mkdir -p workspace_for_merge && cd workspace_for_merge

# Set up SSH
echo "Setting up SSH..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY

# Clone the 'og' repository
echo "Cloning 'og' repository..."
rm -rf og
git clone $OG_REPO og

# Navigate to the cloned 'og' repository to fetch commits related to the desired file
echo "Fetching commits related to $FILENAME..."
cd og
commits_to_cherry_pick=$(git log --reverse --pretty=format:"%H" -- $FILENAME)

# Navigate back to the workspace and clone the 'canvas' repository if not already cloned
cd ..
rm -rf canvas
if [ ! -d "canvas" ]; then
  echo "Cloning 'canvas' repository..."
  git clone $CANVAS_REPO canvas
fi

# Navigate to the 'canvas' repository
cd canvas

# Ensure that we're on the right branch or create one
if git show-ref --verify --quiet refs/heads/$BRANCH_NAME; then
    git checkout $BRANCH_NAME
else
    git checkout -b $BRANCH_NAME
fi

# Cherry-pick commits related to the desired file into the 'canvas' repository
for commit in $commits_to_cherry_pick; do
    # Cherry-pick each commit
    git cherry-pick $commit

    # Check for conflicts specifically related to the FILENAME
    CONFLICTS=$(git diff --name-only --diff-filter=U | grep "$FILENAME" || true)  # '|| true' keeps 'set -e' from aborting when there are no conflicts

    # If there are conflicts in FILENAME
    if [ ! -z "$CONFLICTS" ]; then
        echo "Conflict detected in $FILENAME. Please resolve manually."
        exit 1
    fi
done

# Push the changes
echo "Pushing changes to the 'canvas' repository..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY
git remote set-url origin $CANVAS_REPO
ssh-add $SSH_KEY
git push origin $BRANCH_NAME

echo "Script executed successfully!"

871. gitlog#

(base) d@Poseidon workspace_for_merge % git log
commit e2ca8dc57bb1d35332ad87719e70fb21edec7c77 (HEAD -> merge_branch, main)
Author: jhustata <muzaale@jhmi.edu>
Date:   Fri Aug 4 00:54:16 2023 -0400

    seasons.docx

commit 2bdcaf21290f2a34d8aa7177088bbc52296308d2
Author: muzaale <muzaale@gmail.com>
Date:   Wed Aug 2 13:21:34 2023 -0400

    send this version to fawaz for review

commit 546f62634d35902e5a03d2a422829ff6d612e728
Author: muzaale <muzaale@gmail.com>
Date:   Sat Jul 15 11:55:29 2023 -0400

    vaughn j brathwaite zoom call

commit ac1397deac6cc7cdeca7a207ebe60bd682956846
Merge: 2fe97594 228a1c8b
Author: muzaale <muzaale@gmail.com>
Date:   Sat Jun 24 14:47:32 2023 -0400

    cdf of z = 1-sided p
(base) d@Poseidon workspace_for_merge % 

872. workflow7.0#

872.1. bc.sh#

pwd
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
# nano bc.sh
# chmod +x bc.sh
jb build blank 
git clone https://github.com/muzaale/canvas
cp -r blank/* canvas
cd canvas
git add ./*
git commit -m "iteration... 1"
ssh-keygen -t ed25519 -C "muzaale@gmail.com"
# the next four lines are typed interactively at the ssh-keygen prompts:
# key file path, overwrite confirmation, passphrase, and passphrase confirmation
/users/d/.ssh/id_blankcanvas
y
blank
blank
cat /users/d/.ssh/id_blankcanvas.pub
eval "$(ssh-agent -s)"
pbcopy < ~/.ssh/id_blankcanvas.pub
chmod 600 ~/.ssh/id_blankcanvas
git remote -v
ssh-add -D
git remote set-url origin git@github.com:muzaale/canvas 
ssh-add ~/.ssh/id_blankcanvas
blank   # passphrase typed at the ssh-add prompt
git push 
ghp-import -n -p -f _build/html
cd ..
./gh.sh 

872.2. gh.sh#

#!/bin/bash
set -e  # Stop on any error

# Variables
OG_REPO="https://github.com/afecdvi/og"
CANVAS_REPO="git@github.com:muzaale/canvas"
SSH_KEY="$HOME/.ssh/id_blankcanvas"
FILENAME="seasons.docx"
BRANCH_NAME="merge_branch"

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "git could not be found. Please install git."
    exit 1
fi

# Navigate to the working directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
# mkdir -p workspace_for_merge && cd workspace_for_merge

# Set up SSH
echo "Setting up SSH..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY

# Clone the 'og' repository
echo "Cloning 'og' repository..."
rm -rf og
git clone $OG_REPO og

# Navigate to the cloned 'og' repository to fetch commits related to the desired file
echo "Fetching commits related to $FILENAME..."
cd og
commits_to_cherry_pick=$(git log --reverse --pretty=format:"%H" -- $FILENAME)

# Navigate back to the workspace and clone the 'canvas' repository if not already cloned
cd ..
rm -rf canvas
if [ ! -d "canvas" ]; then
  echo "Cloning 'canvas' repository..."
  git clone $CANVAS_REPO canvas
fi

# Navigate to the 'canvas' repository
cd canvas

# Ensure that we're on the right branch or create one
if git show-ref --verify --quiet refs/heads/$BRANCH_NAME; then
    git checkout $BRANCH_NAME
else
    git checkout -b $BRANCH_NAME
fi

# Cherry-pick commits related to the desired file into the 'canvas' repository
for commit in $commits_to_cherry_pick; do
    # Cherry-pick each commit
    git cherry-pick $commit

    # Check for conflicts specifically related to the FILENAME
    CONFLICTS=$(git diff --name-only --diff-filter=U | grep "$FILENAME" || true)  # '|| true' keeps 'set -e' from aborting when there are no conflicts

    # If there are conflicts in FILENAME
    if [ ! -z "$CONFLICTS" ]; then
        echo "Conflict detected in $FILENAME. Please resolve manually."
        exit 1
    fi
done

# Push the changes
echo "Pushing changes to the 'canvas' repository..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY
git remote set-url origin $CANVAS_REPO
ssh-add $SSH_KEY
git push origin $BRANCH_NAME

echo "Script executed successfully!"

872.3. gitlog#

  • see above

  • make it pretty (one option sketched below)

  • with cherry-pick on top
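
One way to make the log prettier is a compact one-line-per-commit graph view; the format string below is just one possible choice, not part of the workflow scripts above:

git log --graph --date=short --pretty=format:'%h %ad %an %s'

The cherry-picked commits show up at the top of this view because they are the newest commits on the branch.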

872.4. gc.sh#

correct, but will seasons.docx in canvas have the commit history of seasons.docx in afecdvi/og? that’s my humble plea and goal


Understood. The answer is no, by using patches or the cherry-picking method described earlier, you will effectively be creating new commits in canvas that have the same changes as the original commits in og. These new commits will have a different commit hash and won’t directly preserve the original commit messages, dates, or authors.

However, if you want to literally transfer the entire commit history of seasons.docx from afecdvi/og to canvas, it’s a bit more involved. Here’s a rough outline of one method:

  1. Create a branch in og that contains only the history of seasons.docx.

  2. Turn that branch into its own standalone repository.

  3. Add the new standalone repository as a remote to canvas.

  4. Fetch and merge the branch from the standalone repository into canvas.

Here’s how you can do this:

  1. In og, filter out the history of seasons.docx:

cd og
git checkout --orphan temp_branch
git add seasons.docx
git commit -m "Init seasons.docx"
git rebase --onto temp_branch --root master
  2. Clean up and turn og into a standalone repo for seasons.docx:

git gc --prune=all
  3. Add the standalone repo as a remote to canvas and fetch:

cd ../canvas
git remote add og_temp ../og
git fetch og_temp
  4. Merge the history of seasons.docx into canvas:

git merge og_temp/master --allow-unrelated-histories

This will preserve the actual commits and their metadata, but it will effectively merge the histories of the two repositories. This is a non-trivial action and may cause issues if other people are also working on canvas. Ensure you understand the ramifications and maybe test on a separate branch or repository first.
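
For example, a throwaway rehearsal of step 4 (a sketch; it assumes canvas's default branch is main, and nothing here is pushed):

git checkout -b test_merge_seasons
git merge og_temp/master --allow-unrelated-histories
git log --follow --oneline -- seasons.docx   # confirm the history came across
git checkout main
git branch -D test_merge_seasons             # discard the rehearsal branch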

If this is what you want, I can help further by creating a script. Let me know.


#!/bin/bash

set -e  # Stop on any error

# Variables
OG_REPO="https://github.com/afecdvi/og"
CANVAS_REPO="git@github.com:muzaale/canvas"
SSH_KEY="$HOME/.ssh/id_blankcanvas"
FILENAME="seasons.docx"
BRANCH_NAME="merge_seasons_docx"

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "git could not be found. Please install git."
    exit 1
fi

# Set up SSH
echo "Setting up SSH..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY

# Navigate to the working directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Clone the 'og' repository
echo "Cloning 'og' repository..."
rm -rf og_temp
git clone $OG_REPO og_temp

# Navigate to the cloned 'og' repository and filter out the history of $FILENAME
echo "Filtering out the history of $FILENAME in 'og' repository..."
cd og_temp
git checkout --orphan temp_branch
git add $FILENAME
git commit -m "Init $FILENAME"
git rebase --onto temp_branch --root master
git gc --prune=all

# Navigate back to the working directory and clone the 'canvas' repository if not already cloned
cd ..
if [ ! -d "canvas" ]; then
  echo "Cloning 'canvas' repository..."
  git clone $CANVAS_REPO canvas
fi

# Navigate to the 'canvas' repository
cd canvas

# Add the filtered 'og' repo as a remote and fetch its commits
echo "Fetching the commits from 'og' repository..."
git remote add og_temp ../og_temp
git fetch og_temp

# Merge the commits into 'canvas' repo
echo "Merging the commits into 'canvas' repository..."
git checkout -b $BRANCH_NAME
git merge og_temp/temp_branch --allow-unrelated-histories

# Push the changes to the 'canvas' repository
echo "Pushing changes to the 'canvas' repository..."
git remote set-url origin $CANVAS_REPO
ssh-add $SSH_KEY
git push origin $BRANCH_NAME

echo "Script executed successfully!"

873. workflow7.3#

Here’s a revised script that consolidates the instructions and incorporates your feedback:

#!/bin/bash

set -e  # Stop on any error

# Variables
OG_REPO="https://github.com/afecdvi/og"
CANVAS_REPO="git@github.com:muzaale/canvas"
SSH_KEY="$HOME/.ssh/id_blankcanvas"
FILENAME="seasons.docx"
BRANCH_NAME="merge_seasons_docx"

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "git could not be found. Please install git."
    exit 1
fi

# Set up SSH
echo "Setting up SSH..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY

# Navigate to the working directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Clone the 'og' repository
echo "Cloning 'og' repository..."
rm -rf og_temp
git clone $OG_REPO og_temp

# Navigate to the cloned 'og' repository and filter out the history of $FILENAME
echo "Filtering out the history of $FILENAME in 'og' repository..."
cd og_temp
git checkout --orphan temp_branch
git add $FILENAME
git commit -m "Init $FILENAME"
git rebase --onto temp_branch --root master
git branch -D master
git branch -m temp_branch master
git gc --prune=all

# Navigate back to the working directory 
cd ..

# Clone the 'canvas' repository if not already cloned
if [ ! -d "canvas" ]; then
  echo "Cloning 'canvas' repository..."
  git clone $CANVAS_REPO canvas
fi

# Navigate to the 'canvas' repository and set up for the merge
cd canvas

# Add the filtered 'og' repo as a remote and fetch its commits
echo "Fetching the commits from 'og' repository..."
git remote add og_temp ../og_temp
git fetch og_temp

# Ensure you're on the main branch of canvas before creating a new branch
git checkout main

# Create a new branch for the merge
git checkout -b $BRANCH_NAME

# Merge the commits from the 'og' repository
echo "Merging the commits into 'canvas' repository..."
git merge og_temp/master --allow-unrelated-histories

# Push the new branch to the 'canvas' repository
echo "Pushing changes to the 'canvas' repository..."
git push origin $BRANCH_NAME

echo "Script executed successfully!"

The script should now be able to:

  1. Set up SSH.

  2. Clone the ‘og’ repository.

  3. Filter out the history of the specified file (FILENAME) from ‘og’.

  4. Clone the ‘canvas’ repository (if not already cloned).

  5. Fetch the filtered history of ‘og’.

  6. Merge the filtered history into a new branch in ‘canvas’.

  7. Push the new branch to the ‘canvas’ remote.

Do make sure you have permissions to push to the ‘canvas’ repo and that the SSH key ($SSH_KEY) has been added to your GitHub account or the respective account hosting the repositories.
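
A quick way to confirm that before running the script (GitHub's standard SSH test; not part of gh.sh itself):

ssh-add ~/.ssh/id_blankcanvas
ssh -T git@github.com    # should reply "Hi <username>! You've successfully authenticated..."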

874. blankcanvas.sh#

  • literally what it is

  • but its header segues into a delightful place

  • and it’s a good place to be

875. database#

Design a database schema for an online merch store.


Certainly! Below is a basic database schema for an online merchandise store. This schema covers key features but can be expanded or adjusted based on specific requirements.

1. Tables:#

1. Users

  • UserID (Primary Key, Auto-increment)

  • Username

  • Email (Unique)

  • Password (hashed)

  • Firstname

  • Lastname

  • DateJoined

  • LastLoginDate

2. Products

  • ProductID (Primary Key, Auto-increment)

  • ProductName

  • Description

  • Price

  • StockQuantity

  • CategoryID (Foreign Key)

  • ImageURL

3. Categories

  • CategoryID (Primary Key, Auto-increment)

  • CategoryName

  • Description

4. Orders

  • OrderID (Primary Key, Auto-increment)

  • UserID (Foreign Key)

  • OrderDate

  • ShippingAddressID (Foreign Key)

  • OrderStatus

5. OrderDetails

  • OrderDetailID (Primary Key, Auto-increment)

  • OrderID (Foreign Key)

  • ProductID (Foreign Key)

  • QuantityOrdered

  • PriceEach

6. Addresses

  • AddressID (Primary Key, Auto-increment)

  • UserID (Foreign Key)

  • StreetAddress1

  • StreetAddress2

  • City

  • State

  • PostalCode

  • Country

  • IsDefault

7. Cart

  • CartID (Primary Key, Auto-increment)

  • UserID (Foreign Key)

8. CartItems

  • CartItemID (Primary Key, Auto-increment)

  • CartID (Foreign Key)

  • ProductID (Foreign Key)

  • Quantity

9. Reviews

  • ReviewID (Primary Key, Auto-increment)

  • UserID (Foreign Key)

  • ProductID (Foreign Key)

  • Rating (e.g., 1-5)

  • Comment

  • ReviewDate

2. Relationships:#

  • Users and Orders: One-to-Many

    • A user can place multiple orders, but each order is placed by one user.

  • Users and Addresses: One-to-Many

    • A user can have multiple shipping addresses, but each address belongs to one user.

  • Products and Categories: Many-to-One

    • A product belongs to one category, but each category can have multiple products.

  • Orders and OrderDetails: One-to-Many

    • An order can have multiple products (order details), but each order detail refers to one order.

  • Users and Reviews: One-to-Many

    • A user can leave multiple reviews, but each review is written by one user.

  • Products and Reviews: One-to-Many

    • A product can have multiple reviews, but each review pertains to one product.

  • Users and Cart: One-to-One

    • A user has one cart, and each cart belongs to one user.

  • Cart and CartItems: One-to-Many

    • A cart can have multiple cart items, but each cart item belongs to one cart.

3. Indexes:#

Indexes should be created on columns that are frequently searched or joined to enhance query performance. For example:

  • Email in Users

  • ProductName and CategoryID in Products

  • UserID in Orders, Addresses, Reviews, and Cart

This schema provides a foundational structure for an online merch store. Depending on the specific requirements and features needed (e.g., wish lists, promotional codes, affiliate tracking, etc.), additional tables and fields may be necessary.


bring this to life even by simulation if necessary: give me Python code


Creating a real-world application with the above schema involves multiple layers including front-end, back-end, and database interactions. I’ll provide a simplified simulation in Python that focuses on the backend aspect, using SQLite as the database for ease of demonstration.

  1. Setting Up the Database:

import sqlite3

def setup_database():
    conn = sqlite3.connect("merch_store.db")
    cursor = conn.cursor()
    
    # Users table
    cursor.execute('''
    CREATE TABLE IF NOT EXISTS Users (
        UserID INTEGER PRIMARY KEY AUTOINCREMENT,
        Username TEXT NOT NULL,
        Email TEXT UNIQUE NOT NULL,
        Password TEXT NOT NULL,
        Firstname TEXT,
        Lastname TEXT,
        DateJoined DATE,
        LastLoginDate DATE
    )''')
    
    # Remaining tables ... (similarly create tables for Products, Categories, etc.)
    
    conn.commit()
    conn.close()
  2. Inserting Data (Using Users as an example):

def add_user(username, email, password, firstname, lastname):
    conn = sqlite3.connect("merch_store.db")
    cursor = conn.cursor()
    
    cursor.execute('''
    INSERT INTO Users (Username, Email, Password, Firstname, Lastname, DateJoined, LastLoginDate)
    VALUES (?, ?, ?, ?, ?, CURRENT_DATE, CURRENT_DATE)
    ''', (username, email, password, firstname, lastname))
    
    conn.commit()
    conn.close()
  3. Fetching Data (Again, using Users as an example):

def fetch_user(email):
    conn = sqlite3.connect("merch_store.db")
    cursor = conn.cursor()
    
    cursor.execute("SELECT * FROM Users WHERE Email=?", (email,))
    user = cursor.fetchone()
    
    conn.close()
    return user
  4. Main Program Flow:

def main():
    setup_database()
    
    # Adding a sample user
    add_user("john_doe", "john.doe@example.com", "securepassword", "John", "Doe")
    
    # Fetching and displaying the user
    user = fetch_user("john.doe@example.com")
    print(user)

if __name__ == "__main__":
    main()

This is a very basic setup. In a real-world application:

  • Passwords would be hashed (e.g., using bcrypt).

  • Error handling and input validation would be essential.

  • Interaction with other tables and relationships would be implemented.

  • A RESTful API, frontend, and more advanced backend features would be added.

  • ORM (Object Relational Mapping) tools like SQLAlchemy could be used to simplify database operations.

Always remember to be cautious with handling sensitive information, especially passwords. Always use industry-approved cryptographic methods for storage and transmission.
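
As a small follow-up to the Indexes section of the schema, the Email index could be added to the same SQLite file once setup_database() has run — a sketch using the sqlite3 command-line shell (the same pattern applies to ProductName, CategoryID, and the UserID columns once those tables exist):

sqlite3 merch_store.db "CREATE INDEX IF NOT EXISTS idx_users_email ON Users(Email);"
sqlite3 merch_store.db ".indexes Users"    # verify the index was created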

876. adversarial-learning#

User
brief spiel on my gc.sh file: #!/bin/bash

set -e  # Stop on any error

# Variables
OG_REPO="https://github.com/afecdvi/og"
CANVAS_REPO="git@github.com:muzaale/canvas"
SSH_KEY="$HOME/.ssh/id_blankcanvas"
FILENAME="seasons.docx"
BRANCH_NAME="merge_seasons_docx"

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "git could not be found. Please install git."
    exit 1
fi

# Set up SSH
echo "Setting up SSH..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY

# Navigate to the working directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Clone the 'og' repository
echo "Cloning 'og' repository..."
rm -rf og_temp
git clone $OG_REPO og_temp

# Navigate to the cloned 'og' repository and filter out the history of $FILENAME
echo "Filtering out the history of $FILENAME in 'og' repository..."
cd og_temp
git checkout --orphan temp_branch
git add $FILENAME
git commit -m "Init $FILENAME"
git rebase --onto temp_branch --root master
git branch -D master
git branch -m temp_branch master
git gc --prune=all

# Navigate back to the working directory 
cd ..

# Clone the 'canvas' repository if not already cloned
if [ ! -d "canvas" ]; then
  echo "Cloning 'canvas' repository..."
  git clone $CANVAS_REPO canvas
fi

# Navigate to the 'canvas' repository and set up for the merge
cd canvas

# Add the filtered 'og' repo as a remote and fetch its commits
echo "Fetching the commits from 'og' repository..."
git remote add og_temp ../og_temp
git fetch og_temp

# Ensure you're on the main branch of canvas before creating a new branch
git checkout main

# Create a new branch for the merge
git checkout -b $BRANCH_NAME

# Merge the commits from the 'og' repository
echo "Merging the commits into 'canvas' repository..."
git merge og_temp/master --allow-unrelated-histories

# Push the new branch to the 'canvas' repository
echo "Pushing changes to the 'canvas' repository..."
git push origin $BRANCH_NAME

echo "Script executed successfully!"

The provided gc.sh script seems to be designed to merge the history of a specific file (seasons.docx) from one Git repository (og) into another Git repository (canvas). Below is a breakdown of what the script does:

  1. Stop on Any Error: Using set -e, the script will halt if any command fails.

  2. Variables: There are several predefined variables that configure the script:

    • OG_REPO: The original repository’s URL.

    • CANVAS_REPO: The destination repository’s URL.

    • SSH_KEY: The SSH key location for Git authentication.

    • FILENAME: The file name to filter from the original repo.

    • BRANCH_NAME: The name of the new branch where the file’s history will be merged into the canvas repository.

  3. Git Installation Check: Checks if Git is installed on the machine. If not, it throws an error.

  4. Setting up SSH: Sets up SSH by starting the agent, making sure the provided SSH key is only readable by the owner (for security), and adding the key to the agent.

  5. Navigation: The script then navigates to a specified working directory.

  6. Clone and Filter og Repository:

    • Clones the original repository into a temporary directory (og_temp).

    • Then, it navigates into this temporary directory and uses a combination of Git commands to isolate the history of seasons.docx while getting rid of the other files and their histories.

  7. Clone canvas Repository: If the canvas repository hasn’t been cloned yet, it will clone it.

  8. Set Up for Merge in canvas Repository:

    • Navigates to the canvas repo directory.

    • Adds the filtered og_temp repo as a remote source.

    • Fetches its commits.

    • Checks out the main branch and creates a new branch (merge_seasons_docx).

  9. Merge: The script merges the isolated seasons.docx history from og_temp repository into the new branch of canvas repository.

  10. Push: Pushes the changes of the new branch to the remote canvas repository.

  11. Completion: A message “Script executed successfully!” is displayed once all the commands are executed without error.

Takeaways:#

  • Purpose: The script’s primary purpose is to transplant the history of a specific file from one repository into another repository, maintaining its commit history.

  • Usability: The script is well-structured with echo statements which guide the user through its steps, making it user-friendly.

  • Security: The script uses SSH for Git operations. Make sure that the provided SSH key has the right permissions on GitHub or whichever platform hosts the repositories.

  • Precautions: The script makes several changes that are hard or impossible to revert, like force-cloning over directories and altering Git history. It’s always good practice to back up your repositories before running such scripts (one way to do that is sketched below).
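
One low-effort way to take such a backup (a generic git idiom, not something gc.sh does):

# a mirror clone keeps every branch, tag, and ref; restore later with `git push --mirror`
git clone --mirror git@github.com:muzaale/canvas canvas-backup.git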


how would you improve gc.sh? my work directory:

(base) d@Poseidon 1.ontology % pwd
/Users/d/dropbox/1f.ἡἔρις,κ/1.ontology
(base) d@Poseidon 1.ontology % ls -l
total 24
-rw-r--r--@   1 d  staff   601 Aug  4 08:06 bc.sh
drwxr-xr-x@  21 d  staff   672 Aug  4 00:38 blank
drwxr-xr-x@ 276 d  staff  8832 Aug  3 15:54 bloc
drwxr-xr-x@  22 d  staff   704 Aug  4 08:16 canvas
drwxr-xr-x@ 280 d  staff  8960 Aug  3 18:46 denotas
drwxr-xr-x@  15 d  staff   480 Aug  1 14:43 fenagas
-rwxr-xr-x@   1 d  staff  1952 Aug  4 08:41 gc.sh
-rwxr-xr-x@   1 d  staff  1952 Aug  4 08:41 gh.sh
drwxr-xr-x@  29 d  staff   928 Jul 20 20:26 libro
drwxr-xr-x@ 144 d  staff  4608 Jun 23 23:20 livre
drwxr-xr-x@  14 d  staff   448 Aug  4 09:47 llc
drwxr-xr-x@  20 d  staff   640 Aug  2 13:18 mb
drwxr-xr-x@  22 d  staff   704 Aug  4 08:16 og
drwxr-xr-x@  15 d  staff   480 Jul 31 01:05 repos
drwxr-xr-x@  18 d  staff   576 Jul 18 10:57 spring
drwxr-xr-x@ 139 d  staff  4448 Jun 25 08:29 summer
drwxr-xr-x@  14 d  staff   448 Jul 31 06:24 track
drwxr-xr-x@  25 d  staff   800 Jul 20 20:21 verano
(base) d@Poseidon 1.ontology % 



Given the structure of the current gc.sh script and the layout of your working directory, here are some improvements you can consider:

  1. Parameterization: Instead of hardcoding values in the script, accept them as arguments. This makes your script more versatile. For instance, if you need to merge a different file or branch in the future, you can easily do that without modifying the script.

  2. Error Handling: Improve error handling. For instance, ensure directories exist before navigating into them and handle the case where they don’t.

  3. Consistent Directory Naming: Instead of hardcoding directory names like “canvas” and “og_temp”, consider using consistent naming or even generating temporary directory names to avoid any clashes or unintentional overwrites.

  4. Check Existing Remotes: Before adding a new remote, check if it already exists to avoid errors.

  5. Cleanup: At the end of your script, consider cleaning up any temporary changes, like removing temporary directories or remotes.

  6. Logging: Introduce logging so you can troubleshoot or verify actions later on.

  7. Comments: While your script is quite self-explanatory, especially with the echo commands, adding comments for complex operations can aid future readers or yourself when revisiting the script.

  8. Avoid Hard Exit: Instead of directly exiting the script when a certain condition isn’t met, consider printing a warning and then proceed with the rest of the script or provide alternatives.

Here’s a revised version of the script with some of these suggestions implemented:

#!/bin/bash

set -e  # Stop on any error

# Variables
OG_REPO=${1:-"https://github.com/afecdvi/og"}
CANVAS_REPO=${2:-"git@github.com:muzaale/canvas"}
SSH_KEY=${3:-"$HOME/.ssh/id_blankcanvas"}
FILENAME=${4:-"seasons.docx"}
BRANCH_NAME=${5:-"merge_seasons_docx"}

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "WARNING: git could not be found. Please install git."
else
    # Set up SSH
    echo "Setting up SSH..."
    eval "$(ssh-agent -s)"
    chmod 600 $SSH_KEY
    ssh-add -D
    ssh-add $SSH_KEY

    # Navigate to the working directory
    cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

    # Clone the 'og' repository
    echo "Cloning 'og' repository..."
    TEMP_DIR="og_temp_$(date +%s)"
    rm -rf $TEMP_DIR
    git clone $OG_REPO $TEMP_DIR

    # Filter out the history of $FILENAME
    cd $TEMP_DIR
    git checkout --orphan temp_branch
    git add $FILENAME
    git commit -m "Init $FILENAME"
    git rebase --onto temp_branch --root master
    git branch -D master
    git branch -m temp_branch master
    git gc --prune=all

    # Navigate back to the working directory 
    cd ..

    # Clone the 'canvas' repository if not already cloned
    if [ ! -d "canvas" ]; then
        echo "Cloning 'canvas' repository..."
        git clone $CANVAS_REPO canvas
    fi

    cd canvas

    # Add the filtered 'og' repo as a remote and fetch its commits
    if ! git remote | grep -q "og_temp"; then
        git remote add og_temp ../$TEMP_DIR
    fi
    git fetch og_temp

    git checkout main
    git checkout -b $BRANCH_NAME
    git merge og_temp/master --allow-unrelated-histories

    git push origin $BRANCH_NAME

    # Cleanup
    git remote remove og_temp
    cd ..
    rm -rf $TEMP_DIR
fi

echo "Script executed successfully!"

Remember, the changes above are illustrative. Your actual requirements and constraints will dictate which changes make sense and which don’t. Always back up your data before running modified scripts or other operations.

877. fatal-error#

It looks like you’re trying to run some Git operations and ran into the error: fatal: no such branch/commit 'master'. This is likely because the default branch is not named master.

Here are some steps to troubleshoot and resolve the issue:

  1. Check Current Branches: First, let’s see the list of branches you have. Run:

    git branch -a
    

    This command will list all the branches in your local repo as well as remote-tracking branches.

  2. Default Branch Name: Git repositories created after October 1, 2020, on platforms like GitHub use main as the default branch name instead of master. If you see main in your branch list and no master, this is likely the issue. Replace any reference to master in your scripts or commands with main.

  3. Reference the Correct Branch: If neither master nor main are listed in your branches, identify the correct default branch from the output of the git branch -a command and use that instead.

  4. Update Your Script: If you’re using a script similar to the gc.sh you shared earlier, find the line where master is referenced (like in the git merge og_temp/master --allow-unrelated-histories command) and replace it with the correct branch name.

  5. Consider Renaming: If you prefer to stick with the master naming convention (or any other name), and if you have the necessary permissions, you can rename the default branch:

    git branch -m old_branch_name new_branch_name
    

    Replace old_branch_name with the current default branch name (like main) and new_branch_name with the desired name (like master).

  6. Sync with Remote: If you renamed the branch, don’t forget to push the changes to the remote repository and set the upstream branch:

    git push -u origin new_branch_name
    

Make sure to update any scripts, CI/CD configurations, or documentation that refers to the old branch name after renaming.
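
A generic way to avoid hardcoding master or main at all (an idiom, not something the current scripts do) is to ask the remote for its default branch:

# og_temp is the remote name used in the scripts above
DEFAULT_BRANCH=$(git remote show og_temp | sed -n 's/.*HEAD branch: //p')
echo "Default branch of og_temp: $DEFAULT_BRANCH"
git merge "og_temp/$DEFAULT_BRANCH" --allow-unrelated-histories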

878. onemoreiteration#

Certainly, I can see you’re trying to simplify and get back to the version of gc.sh that had worked well for you previously, with just the necessary changes to accommodate the “Not a valid object name HEAD” error.

The main issue seems to be the occurrence of merge conflicts due to your current working branch having changes (or a different history) than what you’re trying to merge in. Since this script automates a significant set of operations, when something goes wrong in between, the outcome can be tricky.

Let’s update the original gc.sh to address the aforementioned error and potential merge conflicts:

#!/bin/bash

set -e  # Stop on any error

# Variables
OG_REPO=${1:-"https://github.com/afecdvi/og"}
CANVAS_REPO=${2:-"git@github.com:muzaale/canvas"}
SSH_KEY=${3:-"$HOME/.ssh/id_blankcanvas"}
FILENAME=${4:-"seasons.docx"}
BRANCH_NAME=${5:-"merge_seasons_docx"}

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "WARNING: git could not be found. Please install git."
    exit 1
fi

# Set up SSH
echo "Setting up SSH..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY

# Navigate to the working directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Clone the 'og' repository and filter its history
echo "Cloning 'og' repository and filtering history for $FILENAME..."
TEMP_DIR="og_temp_$(date +%s)"
rm -rf $TEMP_DIR
git clone $OG_REPO $TEMP_DIR
cd $TEMP_DIR
git filter-branch --prune-empty --index-filter "
    git rm --cached --ignore-unmatch *;
    if [ -f '$FILENAME' ]; then
        git add $FILENAME;
    fi
" main

# Return to main directory
cd ../

# Check if the 'canvas' repository is present and if not, clone it
echo "Processing 'canvas' repository..."
if [ ! -d "canvas" ]; then
    git clone $CANVAS_REPO canvas
fi

cd canvas

# Recovery steps if HEAD is disconnected
if ! git symbolic-ref HEAD &> /dev/null; then
    echo "Reconnecting HEAD..."
    echo ref: refs/heads/main > .git/HEAD
fi

# Add the filtered 'og' repo as a remote and fetch its commits
if ! git remote | grep -q "og_temp"; then
    git remote add og_temp ../$TEMP_DIR
fi
git fetch og_temp

# Merge the filtered branch into a new branch
git checkout -b $BRANCH_NAME
git merge og_temp/main --allow-unrelated-histories

# If merge conflict occurs
if [ "$(git ls-files -u | wc -l)" -ne 0 ]; then
    echo "ERROR: Merge conflict detected. Please resolve manually."
    exit 1
fi

git push origin $BRANCH_NAME

# Cleanup
git remote remove og_temp
cd ..
rm -rf $TEMP_DIR

echo "Script executed successfully!"

Changes made:

  1. Reintroduced the check for disconnected HEAD and reconnected it if needed.

  2. Introduced a simple check for merge conflicts right after the merge. If conflicts occur, the script stops and advises you to solve them manually.

This script should be more stable, especially in your current working context. However, given the complexity of operations and potential variances in the state of repositories, you might still run into challenges. Always backup your work and run the script in a test environment if possible.
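
If that conflict check does fire, resolving it for a binary file like seasons.docx usually comes down to picking one side and finishing the merge by hand — a sketch, run inside canvas:

git status                             # list the conflicted paths
git checkout --theirs seasons.docx     # keep the incoming 'og' version (use --ours to keep canvas's)
git add seasons.docx
git commit                             # conclude the merge
git push origin merge_seasons_docx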

879. alpha-beta#

  1. cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

  2. track repos

  3. edit track

  4. commit in ./track/trackrepos.sh

  5. vigilante

  6. jhutrc/beta

  7. jhutrc/beta

  8. ssh-keygen -t ed25519 -C "muzaale@icloud.com"

  9. /users/d/.ssh/id_alphabeta

  10. alpha

  11. alpha

  12. cat /users/d/.ssh/id_alphabeta.pub

  13. eval "$(ssh-agent -s)"

  14. pbcopy < ~/.ssh/id_alphabeta.pub

  15. chmod 600 ~/.ssh/id_alphabeta

  16. git remote -v

  17. ssh-add -D

  18. git remote set-url origin git@github.com:jhutrc/beta

  19. ssh-add ~/.ssh/id_alphabeta

  20. alpha

  21. https://muzaale.github.io/denotas/bdn202308.html#yesterday

  22. mkdir -p alpha

  23. nano alpha/_toc.yml

  24. chatgpt

  25. nano ./alpha.sh

  26. chmod +x alpha.sh

  27. ./alpha.sh

  28. nano alpha/_config.yml

  29. alpha/hub_and_spoke.jpg

  30. nano alpha/alphabeta.sh

  31. chmod +x alpha/alphabeta.sh

  32. cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

  33. chmod +x alpha/alphabeta.sh

  34. cd alpha

  35. vscode > file > new file > alpha/alpha.ipynb

  36. find . -name "*.ipynb" -exec cp "alpha.ipynb" {} \;

  37. cd ..

  38. cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

  39. git clone jhutrc/beta

  40. jb build alpha

  41. cp -r alpha/* beta

  42. cd beta

  43. git add ./*

  44. git commit -m "third jb created manually"

  45. chmod 600 ~/.ssh/id_alphabeta

  46. ssh-add -D

  47. git remote set-url origin git@github.com:jhutrc/beta

  48. ssh-add ~/.ssh/id_alphabeta

  49. alpha

  50. git push

  51. ghp-import -n -p -f _build/html

880. directory#

   8d4a34c..465ee23  gh-pages -> gh-pages
(base) d@Poseidon 1.ontology % ls -l alpha
total 120
drwxr-xr-x@  5 d  staff    160 Aug  4 13:55 Act I
drwxr-xr-x@  6 d  staff    192 Aug  4 13:55 Act II
drwxr-xr-x@  7 d  staff    224 Aug  4 13:55 Act III
drwxr-xr-x@  8 d  staff    256 Aug  4 13:55 Act IV
drwxr-xr-x@  8 d  staff    256 Aug  4 13:55 Act V
drwxr-xr-x@  4 d  staff    128 Aug  4 13:55 Courses
drwxr-xr-x@ 10 d  staff    320 Aug  4 13:55 Epilogue
drwxr-xr-x@  5 d  staff    160 Aug  4 13:55 Git & Spoke
drwxr-xr-x@  5 d  staff    160 Aug  4 14:18 _build
-rw-r--r--@  1 d  staff    950 Aug  4 14:05 _config.yml
-rw-r--r--@  1 d  staff   5429 Aug  4 13:40 _toc.yml
-rw-r--r--@  1 d  staff    228 Aug  4 14:11 alpha.ipynb
-rwxr-xr-x@  1 d  staff  11723 Aug  4 13:55 alpha.sh
-rwxr-xr-x@  1 d  staff    308 Aug  4 14:27 alphabeta.sh
drwxr-xr-x@ 12 d  staff    384 Aug  4 13:55 dramatispersonae
-rw-r--r--@  1 d  staff  17905 Aug  3 18:03 hub_and_spoke.jpg
-rw-r--r--@  1 d  staff    228 Aug  4 14:13 intro.ipynb
-rw-r--r--@  1 d  staff    228 Aug  4 14:13 prologue.ipynb
(base) d@Poseidon 1.ontology % 

881. intro#

  1. populate directories

  2. alpha/alphabeta.sh

  3. fix image paths

  4. .ipynb per dpersonae

  5. _toc.yml bugs


#!/bin/bash

# Change the working directory to the desired location
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Create the "alpha" directory
# mkdir -p alpha
# nano alpha/_toc.yml

# Create the "Root" folder and the "intro.ipynb" file inside it
touch "alpha/intro.ipynb"

# Create the "prologue.ipynb" file in the "alpha" directory
touch "alpha/prologue.ipynb"

# Create "Act I" folder and its subfiles
mkdir -p "alpha/Act I"
touch "alpha/Act I/act1_1.ipynb"
touch "alpha/Act I/act1_2.ipynb"
touch "alpha/Act I/act1_3.ipynb"

# Create "Act II" folder and its subfiles
mkdir -p "alpha/Act II"
touch "alpha/Act II/act2_1.ipynb"
touch "alpha/Act II/act2_2.ipynb"
touch "alpha/Act II/act2_3.ipynb"
touch "alpha/Act II/act2_4.ipynb"

# Create "Act III" folder and its subfiles
mkdir -p "alpha/Act III"
touch "alpha/Act III/act3_1.ipynb"
touch "alpha/Act III/act3_2.ipynb"
touch "alpha/Act III/act3_3.ipynb"
touch "alpha/Act III/act3_4.ipynb"
touch "alpha/Act III/act3_5.ipynb"

# Create "Act IV" folder and its subfiles
mkdir -p "alpha/Act IV"
touch "alpha/Act IV/act4_1.ipynb"
touch "alpha/Act IV/act4_2.ipynb"
touch "alpha/Act IV/act4_3.ipynb"
touch "alpha/Act IV/act4_4.ipynb"
touch "alpha/Act IV/act4_5.ipynb"
touch "alpha/Act IV/act4_6.ipynb"

# Create "Act V" folder and its subfiles
mkdir -p "alpha/Act V"
touch "alpha/Act V/act5_1.ipynb"
touch "alpha/Act V/act5_2.ipynb"
touch "alpha/Act V/act5_3.ipynb"
touch "alpha/Act V/act5_4.ipynb"
touch "alpha/Act V/act5_5.ipynb"
touch "alpha/Act V/act5_6.ipynb"

# Create "Epilogue" folder and its subfiles
mkdir -p "alpha/Epilogue"
touch "alpha/Epilogue/epi_1.ipynb"
touch "alpha/Epilogue/epi_2.ipynb"
touch "alpha/Epilogue/epi_3.ipynb"
touch "alpha/Epilogue/epi_4.ipynb"
touch "alpha/Epilogue/epi_5.ipynb"
touch "alpha/Epilogue/epi_6.ipynb"
touch "alpha/Epilogue/epi_7.ipynb"
touch "alpha/Epilogue/epi_8.ipynb"

# Create "Git & Spoke" folder and its subfiles
mkdir -p "alpha/Git & Spoke"
touch "alpha/Git & Spoke/gas_1.ipynb"
touch "alpha/Git & Spoke/gas_2.ipynb"
touch "alpha/Git & Spoke/gas_3.ipynb"

# Create "Courses" folder and its subfiles
mkdir -p "alpha/Courses"
touch "alpha/Courses/course1.ipynb"
touch "alpha/Courses/course2.ipynb"

# Create "dramatis_personae" folder and its subdirectories
mkdir -p "alpha/dramatis_personae/high_school_students"
mkdir -p "alpha/dramatis_personae/under_grads"
mkdir -p "alpha/dramatis_personae/grad_students”
mkdir -p "alpha/dramatis_personae/graduates"
mkdir -p "alpha/dramatis_personae/medical_students"
mkdir -p "alpha/dramatis_personae/residents"
mkdir -p "alpha/dramatis_personae/fellows"
mkdir -p "alpha/dramatis_personae/faculty"
mkdir -p "alpha/dramatis_personae/analysts"
mkdir -p "alpha/dramatis_personae/staff"
mkdir -p "alpha/dramatis_personae/collaborators"

# ... (rest of the script follows the same pattern) ...

# Create "dramatis_personae" subdirectories with suffixes _1 to _5
for branch in high_school_students under_grads grad_students graduates medical_students residents fellows faculty analysts staff collaborators; do
    for ((i=1; i<=5; i++)); do
        mkdir -p "alpha/dramatis_personae/${branch}/${branch}_${i}"
    done
done

# Create additional .ipynb files inside specific subdirectories
touch "alpha/dramatis_personae/high_school_students/high_school_students.ipynb"
touch "alpha/dramatis_personae/under_grads/under_grads.ipynb"
touch "alpha/dramatis_personae/grad_students/grad_students.ipynb"
touch "alpha/dramatis_personae/graduates/graduates.ipynb"
touch "alpha/dramatis_personae/medical_students/medical_students.ipynb"
touch "alpha/dramatis_personae/residents/residents.ipynb"
touch "alpha/dramatis_personae/fellows/fellows.ipynb"
touch "alpha/dramatis_personae/faculty/faculty.ipynb"
touch "alpha/dramatis_personae/analysts/analysts.ipynb"
touch "alpha/dramatis_personae/staff/staff.ipynb"
touch "alpha/dramatis_personae/collaborators/collaborators.ipynb"
touch "alpha/dramatis_personae/high_school_students/high_school_students_1.ipynb"
touch "alpha/dramatis_personae/high_school_students/high_school_students_2.ipynb"
touch "alpha/dramatis_personae/high_school_students/high_school_students_3.ipynb"
touch "alpha/dramatis_personae/high_school_students/high_school_students_4.ipynb"
touch "alpha/dramatis_personae/high_school_students/high_school_students_5.ipynb"
touch "alpha/dramatis_personae/under_grads/under_grads_1.ipynb"
touch "alpha/dramatis_personae/under_grads/under_grads_2.ipynb"
touch "alpha/dramatis_personae/under_grads/under_grads_3.ipynb"
touch "alpha/dramatis_personae/under_grads/under_grads_4.ipynb"
touch "alpha/dramatis_personae/under_grads/under_grads_5.ipynb"
touch "alpha/dramatis_personae/grad_students/grad_students_1.ipynb"
touch "alpha/dramatis_personae/grad_students/grad_students_2.ipynb"
touch "alpha/dramatis_personae/grad_students/grad_students_3.ipynb"
touch "alpha/dramatis_personae/grad_students/grad_students_4.ipynb"
touch "alpha/dramatis_personae/grad_students/grad_students_5.ipynb"
touch "alpha/dramatis_personae/graduates/graduates_1.ipynb"
touch "alpha/dramatis_personae/graduates/graduates_2.ipynb"
touch "alpha/dramatis_personae/graduates/graduates_3.ipynb"
touch "alpha/dramatis_personae/graduates/graduates_4.ipynb"
touch "alpha/dramatis_personae/graduates/graduates_5.ipynb"
touch "alpha/dramatis_personae/medical_students/medical_students_1.ipynb"
touch "alpha/dramatis_personae/medical_students/medical_students_2.ipynb"
touch "alpha/dramatis_personae/medical_students/medical_students_3.ipynb"
touch "alpha/dramatis_personae/medical_students/medical_students_4.ipynb"
touch "alpha/dramatis_personae/medical_students/medical_students_5.ipynb"
touch "alpha/dramatis_personae/residents/residents_1.ipynb"
touch "alpha/dramatis_personae/residents/residents_2.ipynb"
touch "alpha/dramatis_personae/residents/residents_3.ipynb"
touch "alpha/dramatis_personae/residents/residents_4.ipynb"
touch "alpha/dramatis_personae/residents/residents_5.ipynb"
touch "alpha/dramatis_personae/fellows/fellows_1.ipynb"
touch "alpha/dramatis_personae/fellows/fellows_2.ipynb"
touch "alpha/dramatis_personae/fellows/fellows_3.ipynb"
touch "alpha/dramatis_personae/fellows/fellows_4.ipynb"
touch "alpha/dramatis_personae/fellows/fellows_5.ipynb"
touch "alpha/dramatis_personae/faculty/faculty_1.ipynb"
touch "alpha/dramatis_personae/faculty/faculty_2.ipynb"
touch "alpha/dramatis_personae/faculty/faculty_3.ipynb"
touch "alpha/dramatis_personae/faculty/faculty_4.ipynb"
touch "alpha/dramatis_personae/faculty/faculty_5.ipynb"
touch "alpha/dramatis_personae/analysts/analysts_1.ipynb"
touch "alpha/dramatis_personae/analysts/analysts_2.ipynb"
touch "alpha/dramatis_personae/analysts/analysts_3.ipynb"
touch "alpha/dramatis_personae/analysts/analysts_4.ipynb"
touch "alpha/dramatis_personae/analysts/analysts_5.ipynb"
touch "alpha/dramatis_personae/staff/staff_1.ipynb"
touch "alpha/dramatis_personae/staff/staff_2.ipynb"
touch "alpha/dramatis_personae/staff/staff_3.ipynb"
touch "alpha/dramatis_personae/staff/staff_4.ipynb"
touch "alpha/dramatis_personae/staff/staff_5.ipynb"
touch "alpha/dramatis_personae/collaborators/collaborators_1.ipynb"
touch "alpha/dramatis_personae/collaborators/collaborators_2.ipynb"
touch "alpha/dramatis_personae/collaborators/collaborators_3.ipynb"
touch "alpha/dramatis_personae/collaborators/collaborators_4.ipynb"
touch "alpha/dramatis_personae/collaborators/collaborators_5.ipynb"

# Display the directory tree
echo "Directory Structure:"
echo "-------------------"
echo "alpha/
├── intro.ipynb
├── prologue.ipynb
├── Act I/
│   ├── act1_1.ipynb
│   ├── act1_2.ipynb
│   ├── act1_3.ipynb
│   └── ...
├── Act II/
│   ├── act2_1.ipynb
│   ├── act2_2.ipynb
│   └── ...
├── Act III/
│   ├── act3_1.ipynb
│   ├── act3_2.ipynb
│   ├── act3_3.ipynb
│   ├── act3_4.ipynb
│   └── act3_5.ipynb
├── Act IV/
│   ├── act4_1.ipynb
│   ├── act4_2.ipynb
│   ├── act4_3.ipynb
│   ├── act4_4.ipynb
│   ├── act4_5.ipynb
│   └── act4_6.ipynb
├── Act V/
│   ├── act5_1.ipynb
│   ├── act5_2.ipynb
│   ├── act5_3.ipynb
│   ├── act5_4.ipynb
│   ├── act5_5.ipynb
│   └── act5_6.ipynb
├── Epilogue/
│   ├── epi_1.ipynb
│   ├── epi_2.ipynb
│   ├── epi_3.ipynb
│   ├── epi_4.ipynb
│   ├── epi_5.ipynb
│   ├── epi_6.ipynb
│   ├── epi_7.ipynb
│   └── epi_8.ipynb
├── Gas & Spoke/
│   ├── gas_1.ipynb
│   ├── gas_2.ipynb
│   └── gas_3.ipynb
└── dramatis_personae/
    ├── high_school_students/
       ├── high_school_students_1/
          └── ...
       ├── high_school_students_2/
          └── ...
       ├── high_school_students_3/
          └── ...
       ├── high_school_students_4/
          └── ...
       └── high_school_students_5/
           └── ...
    ├── under_grads/
       ├── under_grads_1/
          └── ...
       ├── under_grads_2/
          └── ...
       ├── under_grads_3/
          └── ...
       ├── under_grads_4/
          └── ...
       └── under_grads_5/
           └── ...
    ├── grad_students/
       ├── grad_students_1/
          └── ...
       ├── grad_students_2/
          └── ...
       ├── grad_students_3/
          └── ...
       ├── grad_students_4/
          └── ...
       └── grad_students_5/
           └── ...
    ├── graduates/
       ├── graduates_1/
          └── ...
       ├── graduates_2/
          └── ...
       ├── graduates_3/
          └── ...
       ├── graduates_4/
          └── ...
       └── graduates_5/
           └── ...
    ├── medical_students/
       ├── medical_students_1/
          └── ...
       ├── medical_students_2/
          └── ...
       ├── medical_students_3/
          └── ...
       ├── medical_students_4/
          └── ...
       └── medical_students_5/
           └── ...
    ├── residents/
       ├── residents_1/
          └── ...
       ├── residents_2/
          └── ...
       ├── residents_3/
          └── ...
       ├── residents_4/
          └── ...
       └── residents_5/
           └── ...
    ├── fellows/
       ├── fellows_1/
          └── ...
       ├── fellows_2/
          └── ...
       ├── fellows_3/
          └── ...
       ├── fellows_4/
          └── ...
       └── fellows_5/
           └── ...
    ├── faculty/
       ├── faculty_1/
          └── ...
       ├── faculty_2/
          └── ...
       ├── faculty_3/
          └── ...
       ├── faculty_4/
          └── ...
       └── faculty_5/
           └── ...
    ├── analysts/
       ├── analysts_1/
          └── ...
       ├── analysts_2/
          └── ...
       ├── analysts_3/
          └── ...
       ├── analysts_4/
          └── ...
       └── analysts_5/
           └── ...
    ├── staff/
       ├── staff_1/
          └── ...
       ├── staff_2/
          └── ...
       ├── staff_3/
          └── ...
       ├── staff_4/
          └── ...
       └── staff_5/
           └── ...
    └── collaborators/
        ├── collaborators_1/
           └── ...
        ├── collaborators_2/
           └── ...
        ├── collaborators_3/
           └── ...
        ├── collaborators_4/
           └── ...
        └── collaborators_5/
            └── ..."
echo "Folder structure has been created successfully."
mv alpha.sh alpha/alpha.sh

format: jb-book
root: intro.ipynb
title: Play

parts:
- caption: 
  chapters:
  - file: prologue.ipynb

- caption: Act I
  chapters:
  - file: Act I/act1_1.ipynb
  - file: Act I/act1_2.ipynb
  - file: Act I/act1_3.ipynb

- caption: Act II
  chapters:
  - file: Act II/act2_1.ipynb
  - file: Act II/act2_2.ipynb
  - file: Act II/act2_3.ipynb
  - file: Act II/act2_4.ipynb

- caption: Act III
  chapters:
  - file: Act III/act3_1.ipynb
  - file: Act III/act3_2.ipynb
  - file: Act III/act3_3.ipynb
  - file: Act III/act3_4.ipynb
  - file: Act III/act3_5.ipynb

- caption: Act IV
  chapters:
  - file: Act IV/act4_1.ipynb
  - file: Act IV/act4_2.ipynb
  - file: Act IV/act4_3.ipynb
  - file: Act IV/act4_4.ipynb
  - file: Act IV/act4_5.ipynb
  - file: Act IV/act4_6.ipynb

- caption: Act V
  chapters:
  - file: Act V/act5_1.ipynb
  - file: Act V/act5_2.ipynb
  - file: Act V/act5_3.ipynb
  - file: Act V/act5_4.ipynb
  - file: Act V/act5_5.ipynb
  - file: Act V/act5_6.ipynb

- caption: Epilogue
  chapters:
  - file: Epilogue/epi_1.ipynb
  - file: Epilogue/epi_2.ipynb
  - file: Epilogue/epi_3.ipynb
  - file: Epilogue/epi_4.ipynb
  - file: Epilogue/epi_5.ipynb
  - file: Epilogue/epi_6.ipynb
  - file: Epilogue/epi_7.ipynb
  - file: Epilogue/epi_8.ipynb

- caption: Gas & Spoke
  chapters:
  - file: Gas & Spoke/gas_1.ipynb
  - file: Gas & Spoke/gas_2.ipynb
  - file: Gas & Spoke/gas_3.ipynb

- caption: Courses
  chapters: 
  - url: https://publichealth.jhu.edu/courses
    title: Stata Programming 
  - file: dramatis_personae/high_school_students/high_school_students.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_1/high_school_students_1.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_1/high_school_students_1_1.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_2.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_3.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_4.ipynb
  - file: dramatis_personae/high_school_students/high_school_students_5.ipynb
  - file: dramatis_personae/under_grads/under_grads.ipynb
  - file: dramatis_personae/under_grads/under_grads_1.ipynb
  - file: dramatis_personae/under_grads/under_grads_2.ipynb
  - file: dramatis_personae/under_grads/under_grads_3.ipynb
  - file: dramatis_personae/under_grads/under_grads_4.ipynb
  - file: dramatis_personae/under_grads/under_grads_5.ipynb
  - file: dramatis_personae/grad_students/grad_students.ipynb
  - file: dramatis_personae/grad_students_1/grad_students_1.ipynb
  - file: dramatis_personae/grad_students_2/grad_students_2.ipynb
  - file: dramatis_personae/grad_students_3/grad_students_3.ipynb
  - file: dramatis_personae/grad_students_4/grad_students_4.ipynb
  - file: dramatis_personae/grad_students_5/grad_students_5.ipynb
  - file: dramatis_personae/medical_students/medical_students.ipynb
  - file: dramatis_personae/medical_students/medical_students_1/medical_students_1.ipynb
  - file: dramatis_personae/medical_students/medical_students_1/medical_students_1_1.ipynb
  - file: dramatis_personae/medical_students/medical_students_1/medical_students_1_2.ipynb
  - file: dramatis_personae/medical_students/medical_students_2.ipynb
  - file: dramatis_personae/medical_students/medical_students_3.ipynb
  - file: dramatis_personae/medical_students/medical_students_4.ipynb
  - file: dramatis_personae/medical_students/medical_students_5.ipynb
  - file: dramatis_personae/residents/residents.ipynb
  - file: dramatis_personae/residents/residents_1.ipynb
  - file: dramatis_personae/residents/residents_2.ipynb
  - file: dramatis_personae/residents/residents_3.ipynb
  - file: dramatis_personae/residents/residents_4.ipynb
  - file: dramatis_personae/residents/residents_5.ipynb
  - file: dramatis_personae/fellows/fellows.ipynb
  - file: dramatis_personae/fellows/fellows_1.ipynb
  - file: dramatis_personae/fellows/fellows_2.ipynb
  - file: dramatis_personae/fellows/fellows_3.ipynb
  - file: dramatis_personae/fellows/fellows_4.ipynb
  - file: dramatis_personae/fellows/fellows_5.ipynb
  - file: dramatis_personae/faculty/faculty.ipynb
  - file: dramatis_personae/faculty/faculty_1/faculty_1.ipynb
  - file: dramatis_personae/faculty/faculty_2/faculty_2.ipynb
  - file: dramatis_personae/faculty/faculty_3/faculty_3.ipynb
  - file: dramatis_personae/faculty/faculty_4/faculty_4.ipynb
  - file: dramatis_personae/faculty/faculty_5/faculty_5.ipynb
  - file: dramatis_personae/faculty/faculty_6/faculty_6.ipynb
  - file: dramatis_personae/faculty/faculty_7/faculty_7.ipynb
  - file: dramatis_personae/faculty/faculty_8/faculty_8.ipynb
  - file: dramatis_personae/faculty/faculty_9/faculty_9.ipynb
  - file: dramatis_personae/faculty/faculty_9/faculty_9_1.ipynb
  - file: dramatis_personae/analysts/analysts.ipynb
  - file: dramatis_personae/analysts/analysts_1.ipynb
  - file: dramatis_personae/analysts/analysts_2.ipynb
  - file: dramatis_personae/analysts/analysts_3.ipynb
  - file: dramatis_personae/analysts/analysts_4.ipynb
  - file: dramatis_personae/analysts/analysts_5.ipynb
  - file: dramatis_personae/staff/staff.ipynb
  - file: dramatis_personae/staff/staff_1.ipynb
  - file: dramatis_personae/staff/staff_2.ipynb
  - file: dramatis_personae/staff/staff_3.ipynb
  - file: dramatis_personae/staff/staff_4.ipynb
  - file: dramatis_personae/staff/staff_5.ipynb
  - file: dramatis_personae/collaborators/collaborators.ipynb
  - file: dramatis_personae/collaborators/collaborators_1/collaborators_1.ipynb
  - file: dramatis_personae/collaborators/collaborators_1/collaborators_1_1.ipynb
  - file: dramatis_personae/collaborators/collaborators_1/collaborators_1_2.ipynb
  - file: dramatis_personae/collaborators/collaborators_2.ipynb
  - file: dramatis_personae/collaborators/collaborators_3.ipynb
  - file: dramatis_personae/collaborators/collaborators_4.ipynb
  - file: dramatis_personae/collaborators/collaborators_5.ipynb
  - file: dramatis_personae/graduates/graduates.ipynb
  - file: dramatis_personae/graduates/graduates_1.ipynb
  - file: dramatis_personae/graduates/graduates_2.ipynb
  - file: dramatis_personae/graduates/graduates_3.ipynb
  - file: dramatis_personae/graduates/graduates_4.ipynb
  - file: dramatis_personae/graduates/graduates_5.ipynb

[High School Students](./dramatis_personae/high_school_students/high_school_students.ipynb)          
[Undergraduates](./dramatis_personae/under_grads/under_grads.ipynb)    
[Graduate Students](./dramatis_personae/grad_students/grad_students.ipynb)      
[Medical Students](./dramatis_personae/medical_students/medical_students.ipynb)   
[Residents](./dramatis_personae/residents/residents.ipynb)     
[Fellows](./dramatis_personae/fellows/fellows.ipynb)    
[Faculty](./dramatis_personae/faculty/faculty.ipynb)    
[Analysts](./dramatis_personae/analysts/analysts.ipynb)    
[Staff](./dramatis_personae/staff/staff.ipynb)    
[Collaborators](./dramatis_personae/collaborators/collaborators.ipynb)    
[Graduates](./dramatis_personae/graduates/graduates.ipynb) 

cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
jb build alpha
cp -r alpha/* beta
cd beta
git add ./*
git commit -m "evolution of COVID-19 variants"
chmod 600 ~/.ssh/id_alphabeta
ssh-add -D
git remote set-url origin git@github.com:jhutrc/beta
ssh-add ~/.ssh/id_alphabeta
alpha
git push
ghp-import -n -p -f _build/html

08/05/2023#

882. workflow 8.0#

I see, you’d like to automate the creation of a directory and file structure based on some predefined parameters, such as the number of acts, the number of files in each act, and the categories under “dramatispersonae”.

You can make your script more generic and less manual by using loops and arrays to generate these structures. Here’s a simplified version:

#!/bin/bash

cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
root_dir="alpha"
acts=( "Act I" "Act II" "Act III" "Act IV" "Act V" )
act_files=( 3 4 5 6 6 ) # change these values according to the number of files in each act
branches=( "high_school_students" "undergraduates" "graduates" "medical_students" "residents" "fellows" "faculty" "analysts" "staff" "collaborators" )

# Create the root directory
mkdir -p $root_dir

# Create intro.ipynb and prologue.ipynb
touch "$root_dir/intro.ipynb"
touch "$root_dir/prologue.ipynb"

# Create Act directories and their files
for ((i=0; i<${#acts[@]}; i++)); do
    act_dir="$root_dir/${acts[$i]}"
    mkdir -p "$act_dir"    # quoted: act names like "Act I" contain spaces
    for ((j=1; j<=${act_files[$i]}; j++)); do
        touch "$act_dir/act$((i+1))_$j.ipynb"
    done
done

# Create the dramatispersonae directories and their files
for branch in "${branches[@]}"; do
    for ((i=1; i<=5; i++)); do
        sub_dir="$root_dir/dramatispersonae/$branch/${branch}_$i"
        mkdir -p $sub_dir
        touch "$sub_dir/${branch}_$i.ipynb"
    done
done

# additional directories and files can be generated following similar pattern
# ...

With this script, if you need to change the number of files in each Act, or add/remove Acts, or add/remove branches, you just need to modify the corresponding array at the top of the script, rather than adding/removing lines for each file/directory.

Please note that you’ll need to add additional lines for other folders like “Epilogue”, “Git & Spoke”, “Courses”, etc., following a similar pattern to the “Act” folders or the “dramatispersonae” folders as appropriate.
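
For instance, an “Epilogue” block in that same style might look like this (eight files, matching the manual script above):

mkdir -p "$root_dir/Epilogue"
for ((j=1; j<=8; j++)); do
    touch "$root_dir/Epilogue/epi_$j.ipynb"
done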

I tried to cover the main points in your script and simplify the logic for those parts. If you have some more specific requirements, let me know and I can help you adjust the script.

881. Calvin#

  1. Stanford med school for the summer

  2. Pediatric cardiocular engineering

  3. Wet lab in this? Apparently so

  4. Focusing on congenital heart disease

  5. Creating extension to monitor heart function

  6. Glucose monitoring

  7. Here till August

  8. “Our school is very weird”: hurricane season causes school to start in August

  9. Andrew busy with immunology and is unable to meet & mentor Calvin

  10. Calvin is a good guy (according to GitHub CoPilot)

  11. I guess I concurred with CoPilot

  12. Because of Fena and CoPilot, I should be able to spend more time with Calvin

882. workflow 8.1#

#!/bin/bash

# inflection point!

# Set root directory
root_dir="$HOME/dropbox/1f.ἡἔρις,κ/1.ontology/be"

acts=("be_1" "be_2" "be_3" "be_4" "be_5")
act_files=(5 5 5 5 5) # change these values according to the number of files in each act
dramatis_personae=("dp_1" "dp_2" "dp_3" "dp_4" "dp_5" "dp_6" "dp_7" "dp_8" "dp_9")

# Create the root directory if not exists
mkdir -p $root_dir

# Create intro.ipynb and prologue.ipynb
touch "${root_dir}/be_0_0.ipynb"
touch "${root_dir}/be_0_1.ipynb"

# Create Act directories and their files
for ((i=0; i<${#acts[@]}; i++)); do
    act_dir="${root_dir}/${acts[$i]}"
    mkdir -p $act_dir
    for ((j=1; j<=${act_files[$i]}; j++)); do
        touch "${act_dir}/${acts[$i]}_$j.ipynb"
    done
done

# Create the dramatis_personae directories and their files
for ((i=0; i<${#dramatis_personae[@]}; i++)); do
    for ((j=1; j<=5; j++)); do
        dp_dir="${root_dir}/dramatis_personae/${dramatis_personae[$i]}"
        mkdir -p $dp_dir
        touch "${dp_dir}/${dramatis_personae[$i]}_$j.ipynb"
    done
done

# additional directories and files can be generated following similar pattern

883. workflow 8.2#

#!/bin/bash

# inflection point!
# chmod +x ip.sh
# folders appeared in parent directory
# this should be fixed to be in the ./be/ directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology
root_dir="be"
acts=( "be_0" "be_1" "be_2" "be_3" "be_4" "be_5" "be_6")
act_files=(2 3 4 5 6 7 8) # change these values according to the number of files in each act
# dramatis_personae=( "dp_1" "dp_2" "dp_3" "dp_4" "dp_5" "dp_6" "dp_7" "dp_8" "dp_9")

# Create the root directory
mkdir -p $root_dir
# cd $root_dir

# Create intro.ipynb and prologue.ipynb
touch "$root_dir/be_0_0.ipynb"
touch "$root_dir/be_0_1.ipynb"

# Create Act directories and their files
for ((i=0; i<${#acts[@]}; i++)); do
    act_dir="$root_dir/${acts[$i]}"
    mkdir -p $act_dir
    for ((j=1; j<=${act_files[$i]}; j++)); do
        touch "$act_dir/be_i${i+1}_$j.ipynb"
    done
done

# Create the dramatispersonae directories and their files
# NOTE: "branches" is not defined in this version (the dramatis_personae array above is commented out),
# so this loop is a no-op as written
for branch in "${branches[@]}"; do
    for ((i=1; i<=5; i++)); do
        sub_dir="$root_dir/dp/$branch/${branch}_$i"
        mkdir -p $sub_dir
        touch "$sub_dir/${branch}_$i.ipynb"
    done
done

# additional directories and files can be generated following similar pattern
# ...

Herein we populate an empty directory with the following structure:

be
├── be_0
│   ├── be_0_0.ipynb
│   └── be_0_1.ipynb
├── be_1
│   ├── be_i1_1.ipynb
│   ├── be_i1_2.ipynb
│   └── be_i1_3.ipynb
├── be_2
│   ├── be_i2_1.ipynb
│   ├── be_i2_2.ipynb
│   ├── be_i2_3.ipynb
│   └── be_i2_4.ipynb
├── be_3
│   ├── be_i3_1.ipynb
│   ├── be_i3_2.ipynb
│   ├── be_i3_3.ipynb
│   ├── be_i3_4.ipynb
│   └── be_i3_5.ipynb
├── be_4
│   ├── be_i4_1.ipynb
│   ├── be_i4_2.ipynb
│   ├── be_i4_3.ipynb
│   ├── be_i4_4.ipynb
│   ├── be_i4_5.ipynb

So the _config.yml will look something like this:

# Site settings
title: "ἡἔρις,κ"
description: "ἡἔρις,κ"
baseurl: "/1f.ἡἔρις,κ"
url: "

# Build settings
markdown: kramdown
theme: jekyll-theme-cayman
plugins:
  - jekyll-feed
  - jekyll-seo-tag
  - jekyll-sitemap
  - jekyll-remote-theme
remote_theme: "mmistakes/minimal-mistakes"

And the _toc.yml will look something like this:

- title: "ἡἔρις,κ"
  url: /1f.ἡἔρις,κ/
  output: web
  folderitems:
    - title: "be"
      url: /1f.ἡἔρις,κ/be/
      output: web
      folderitems:
        - title: "be_0"
          url: /1f.ἡἔρις,κ/be/be_0/
          output: web
          folderitems:
            - title: "be_0_0"
              url: /1f.ἡἔρις,κ/be/be_0/be_0_0/
              output: web
            - title: "be_0_1"
              url: /1f.ἡἔρις,κ/be/be_0/be_0_1/
              output: web
        - title: "be_1"
          url: /1f.ἡἔρις,κ/be/be_1/
          output: web
          folderitems:
            - title: "be_i1_1"
              url: /1f.ἡἔρις,κ/be/be_1/be_i1_1/
              output: web
            - title: "be_i1_2"
              url: /1f.ἡἔρις,κ/be/be_1/be_i1_2/
              output: web
            - title: "be_i1_3"
              url: /1f.ἡἔρις,κ/be/be_1/be_i1_3/
              output: web
        - title: "be_2"
          url: /1f.ἡἔρις,κ/be/be_2/
          output: web
          folderitems:
            - title: "be_i2_1"
              url: /1f.ἡἔρις,κ/be/be_2/be_i2_1/
              output: web
            - title: "be_i2_2"
              url: /1f.ἡἔρις,κ/be/be_

All the above are 100% suggestions from Fena and CoPilot. I have no idea what I am doing. (The last statement is 100% from me. – no… from CoPilot)

884. ip.sh#

  • this signals an inflection point in human history

  • so I am going to call it ip.sh

  • chmod +x ip.sh

Certainly! I apologize for the confusion. Below is a complete and continuous script, combining everything you need to create directories, files, authentication keys, and push HTML content to the gh-pages branch of your repository.

#!/bin/bash

# Set up directories and paths
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"
mkdir -p $root_dir
cd $root_dir

# Generate a new SSH key (replace with your email)
ssh-keygen -t ed25519 -C "muzaale@gmail.com" -f ~/.ssh/id_trackrepos

# Start the ssh-agent and add the key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_trackrepos

# Copy the public key to your clipboard
pbcopy < ~/.ssh/id_trackrepos.pub
echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."

# Define arrays for acts and the number of files for each act
acts=("be_0" "be_1" "be_2" "be_3" "be_4" "be_5" "be_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
    mkdir -p ${acts[$i]}
    for ((j=1; j<=${act_files[$i]}; j++)); do
        touch "${acts[$i]}/be_i$(($i + 1))_$j.ipynb"
    done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "" > $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
    echo "- part: Part $(($i + 1))" >> $toc_file
    echo "  chapters:" >> $toc_file
    for ((j=1; j<=${act_files[$i]}; j++)); do
        echo "    - file: ${acts[$i]}/be_i$(($i + 1))_$j" >> $toc_file
    done
done

# Create __config.yml file
config_file="__config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
jupyter-book build .

# Clone the repository
git clone git@github.com:muzaale/fe.git ~/dropbox/1f.ἡἔρις,κ/1.ontology/fe
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology/fe

# Create or switch to the gh-pages branch
git checkout -b gh-pages

# Copy the HTML files from the build directory to the repository
cp -r ~/dropbox/1f.ἡἔρις,κ/1.ontology/be/_build/html/* .

# Add, commit, and push changes to GitHub
git add .
git commit -m "0.0 Update Jupyter Book content"
git push -u origin gh-pages

echo "Jupyter Book content updated and pushed to muzaale/fe repository!"

You can save this script as a .sh file and run it using bash yourscript.sh. Make sure to follow the printed instruction to add the SSH public key to your GitHub account. Ensure that the necessary tools like jupyter-book, ssh-keygen, pbcopy, and git are installed on your system.
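
One likely culprit for the build failure noted in the next entry: Jupyter Book reads _config.yml (single underscore), so the __config.yml written above is ignored, and recent Jupyter Book releases also expect _toc.yml to declare a format and a root page. A minimal sketch of a corrected file-writing step (the root and chapter entries below are just placeholders for whichever notebooks actually exist):

# write the config under the name Jupyter Book actually looks for
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file

# newer _toc.yml format: explicit book format and root page
{
  echo "format: jb-book"
  echo "root: be_0/be_i1_1"
  echo "chapters:"
  echo "  - file: be_0/be_i1_2"
} > _toc.yml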

885. ip.sh-i2#

  • creates directories and files

  • creates _toc.yml

  • creates __config.yml

  • fails to build the book with Jupyter Book

  • clones the repository

  • creates or switches to the gh-pages branch

  • copies the HTML files from the build directory to the repository

  • adds, commits, and pushes changes to GitHub

#!/bin/bash

# Set up directories and paths
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"
mkdir -p $root_dir
cd $root_dir

# Generate a new SSH key (replace with your email)
ssh-keygen -t ed25519 -C "muzaale@gmail.com" -f ~/.ssh/id_befe
be 
be 
cat ~/.ssh/id_befe.pub
# Start the ssh-agent and add the key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_befe
be
# Copy the public key to your clipboard
pbcopy < ~/.ssh/id_befe.pub
echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."

# Define arrays for acts and the number of files for each act
acts=("be_0" "be_1" "be_2" "be_3" "be_4" "be_5" "be_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
    mkdir -p ${acts[$i]}
    for ((j=1; j<=${act_files[$i]}; j++)); do
        touch "${acts[$i]}/be_i$(($i + 1))_$j.ipynb"
    done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "" > $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
    echo "- part: Part $(($i + 1))" >> $toc_file
    echo "  chapters:" >> $toc_file
    for ((j=1; j<=${act_files[$i]}; j++)); do
        echo "    - file: ${acts[$i]}/be_i$(($i + 1))_$j" >> $toc_file
    done
done

# Create __config.yml file
config_file="__config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
jupyter-book build .

# Clone the repository
git clone git@github.com:muzaale/fe ~/dropbox/1f.ἡἔρις,κ/1.ontology/fe
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology/fe

# Create or switch to the gh-pages branch
git checkout -b gh-pages

# Copy the HTML files from the build directory to the repository
cp -r ~/dropbox/1f.ἡἔρις,κ/1.ontology/be/_build/html/* .

# Add, commit, and push changes to GitHub
git add .
git commit -m "0.0 Update Jupyter Book content"
git push -u origin gh-pages

echo "Jupyter Book content updated and pushed to muzaale/fe repository!"

886. ip.sh-i3#

Certainly! Here’s the entire updated script, with the modification to check if the SSH keys already exist:

#!/bin/bash

# Set up directories and paths
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"
mkdir -p $root_dir
cd $root_dir

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_befe"
if [ ! -f "$SSH_KEY_PATH" ]; then
    ssh-keygen -t ed25519 -C "muzaale@gmail.com" -f ~/.ssh/id_befe
    # Additional commands if needed
    cat ~/.ssh/id_befe.pub
    # Start the ssh-agent and add the key
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_befe
    # Copy the public key to your clipboard
    pbcopy < ~/.ssh/id_befe.pub
    echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
    echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("be_0" "be_1" "be_2" "be_3" "be_4" "be_5" "be_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
    mkdir -p ${acts[$i]}
    for ((j=1; j<=${act_files[$i]}; j++)); do
        touch "${acts[$i]}/be_i$(($i + 1))_$j.ipynb"
    done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "" > $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
    echo "- part: Part $(($i + 1))" >> $toc_file
    echo "  chapters:" >> $toc_file
    for ((j=1; j<=${act_files[$i]}; j++)); do
        echo "    - file: ${acts[$i]}/be_i$(($i + 1))_$j" >> $toc_file
    done
done

# Create _config.yml file
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
jupyter-book build .

# Clone the repository
git clone git@github.com:muzaale/fe ~/dropbox/1f.ἡἔρις,κ/1.ontology/fe
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology/fe

# Create or switch to the gh-pages branch
git checkout -b gh-pages

# Copy the HTML files from the build directory to the repository
cp -r ~/dropbox/1f.ἡἔρις,κ/1.ontology/be/_build/html/* .

# Add, commit, and push changes to GitHub
git add .
git commit -m "0.0 Update Jupyter Book content"
git push -u origin gh-pages

echo "Jupyter Book content updated and pushed to muzaale/fe repository!"

Note: I’ve fixed the name of the config file to _config.yml, as mentioned in your previous comment.


(base) d@Poseidon 1.ontology % ./ip.sh              
SSH keys already exist for this repository. Skipping key generation.
Running Jupyter-Book v0.15.1
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jupyter_book/cli/main.py", line 242, in build
    parse_toc_yaml(toc)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sphinx_external_toc/parsing.py", line 90, in parse_toc_yaml
    return parse_toc_data(data)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sphinx_external_toc/parsing.py", line 100, in parse_toc_data
    raise MalformedError(f"toc is not a mapping: {type(data)}")
sphinx_external_toc.parsing.MalformedError: toc is not a mapping: <class 'list'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/bin/jupyter-book", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jupyter_book/cli/main.py", line 244, in build
    _error(
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jupyter_book/utils.py", line 48, in _error
    raise kind(box)
RuntimeError: 
===============================================================================

The Table of Contents file is malformed: toc is not a mapping: <class 'list'>
You may need to migrate from the old format, using:

        jupyter-book toc migrate /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/be/_toc.yml -o /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/be/_toc.yml

===============================================================================

Cloning into '/Users/d/dropbox/1f.ἡἔρις,κ/1.ontology/fe'...
warning: You appear to have cloned an empty repository.
Switched to a new branch 'gh-pages'
cp: /Users/d/dropbox/1f.ἡἔρις,κ/1.ontology/be/_build/html/*: No such file or directory
On branch gh-pages

Initial commit

nothing to commit (create/copy files and use "git add" to track)
error: src refspec gh-pages does not match any
error: failed to push some refs to 'github.com:muzaale/fe'
Jupyter Book content updated and pushed to muzaale/fe repository!
(base) d@Poseidon 1.ontology % 

The error you’re experiencing seems to originate from the _toc.yml file format. The error message indicates:

sphinx_external_toc.parsing.MalformedError: toc is not a mapping: <class 'list'>

From the _toc.yml contents you’ve shown, the file is a top-level list, which is the older Jupyter Book format. The version of Jupyter Book you’re running (0.15.1, per the output above) expects the newer format, where the _toc.yml file starts with a dictionary (or “mapping”) rather than a list.

Here’s what you can do:

  1. Migrate the Table of Contents
    The error message suggests you may need to migrate the _toc.yml file using the following command:

    jupyter-book toc migrate /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/be/_toc.yml -o /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/be/_toc.yml
    
  2. Check the Jupyter Book Version
    Ensure you’re using a version of Jupyter Book that’s compatible with the _toc.yml format you’re employing. If you’re using an older version of Jupyter Book, consider updating it:

    pip install -U jupyter-book
    
  3. Manually Adjust the _toc.yml File
    The _toc.yml your script writes is in the older list format, so for the installed version of Jupyter Book it needs to become a mapping: a format: jb-book declaration, a root page, and the parts nested under a parts: key (a minimal example follows this list).
    
  4. Ensure Correct Directory Structure
    Verify that the paths listed in _toc.yml (like be_0/be_i1_1) correspond to existing files or directories. If the files do not exist, the build process will fail.

  5. Errors during Repository Operations
    The script also seems to have issues during the Git operations. Specifically, it tries to push to a branch that doesn’t exist in the repository. Make sure the repository is correctly initialized and the branch you’re trying to push to (in this case, gh-pages) exists.

  6. File Copy Error
    The script tries to copy files from a directory (be/_build/html/*) that may not exist. Ensure that the jupyter-book build . command runs successfully before the script attempts to copy files.

After making these adjustments, re-run the script and see if it successfully builds and pushes the Jupyter Book.
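For reference, here is a minimal _toc.yml in the newer mapping format; it mirrors what the working script in 887 (ip.sh-i4) below writes, so the file names are taken from there rather than invented:

format: jb-book
root: intro.ipynb
title: Play
parts:
  - caption: Part 1
    chapters:
      - file: be_0/be_i1_1
      - file: be_0/be_i1_2
  - caption: Part 2
    chapters:
      - file: be_1/be_i2_1
      - file: be_1/be_i2_2
      - file: be_1/be_i2_3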

887. ip.sh-i4#

this one actually works & creates a gh-pages branch, whose link otherwise appears as a 404

#!/bin/bash

# Set up directories and paths
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"
mkdir -p $root_dir
cd $root_dir

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_befe"
if [ ! -f "$SSH_KEY_PATH" ]; then
    ssh-keygen -t ed25519 -C "muzaale@gmail.com" -f ~/.ssh/id_befe
    # Additional commands if needed
    cat ~/.ssh/id_befe.pub
    # Start the ssh-agent and add the key
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_befe
    # Copy the public key to your clipboard
    pbcopy < ~/.ssh/id_befe.pub
    echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
    echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("be_0" "be_1" "be_2" "be_3" "be_4" "be_5" "be_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
    mkdir -p ${acts[$i]}
    for ((j=1; j<=${act_files[$i]}; j++)); do
        cp ~/dropbox/1f.ἡἔρις,κ/1.ontology/populate_be.ipynb "${acts[$i]}/be_i$(($i + 1))_$j.ipynb"
    done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "format: jb-book" > $toc_file
echo "root: intro.ipynb" >> $toc_file
echo "title: Play" >> $toc_file
echo "parts:" >> $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
    echo "  - caption: Part $(($i + 1))" >> $toc_file
    echo "    chapters:" >> $toc_file
    for ((j=1; j<=${act_files[$i]}; j++)); do
        echo "      - file: ${acts[$i]}/be_i$(($i + 1))_$j" >> $toc_file
    done
done

# Create _config.yml file
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
cd ..
jb build $root_dir
git clone https://github.com/muzaale/fe 
cp -r be/* fe
cd fe 
git add ./*
git commit -m "1.1 Update Jupyter Book content"
chmod 600 ~/.ssh/id_befe
git remote -v
ssh-add -D
git remote set-url origin git@github.com:muzaale/fe
ssh-add ~/.ssh/id_befe
be
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to muzaale/fe repository!"

It looks like you’re working with a bash script to create directories and files for a Jupyter Book, configure SSH keys, and push the changes to a GitHub repository. This is quite a comprehensive script!

Here’s a breakdown of some of the main parts of the output:

  1. SSH Key Generation: Your script first checks whether an SSH key exists, and if not, it generates a new one. In this run, it found the existing keys, so it skipped the generation.

  2. Building the Jupyter Book: You attempted to build the book using Jupyter Book, but an error occurred. The error message indicates that the root file intro.rst was not found. You’ll need to fix this before proceeding. Ensure that the intro.rst file exists at the specified path or update the path accordingly in your configuration.

  3. Cloning the Repository: Your script then cloned the repository https://github.com/muzaale/fe, copied the content of the Jupyter Book to the cloned repository, and committed the changes.

  4. Pushing the Changes: You updated the remote URL to use SSH and then pushed the changes to the main branch. You also used ghp-import to push the build HTML, perhaps to update the GitHub Pages site.

  5. Bash Error: The last line in your script is be, which is causing an error because it’s not recognized as a command. You’ll want to investigate what this line is intended to do and correct it.

It seems that you’ve included the script itself at the end of the output. This gives a clear understanding of what you are trying to accomplish. Your script is setting up directories and files, building a Jupyter Book, cloning a repository, and pushing the updates.

Here’s what you might want to consider doing next:

  • Fix the issue with the missing intro.rst file and try building the Jupyter Book again.

  • Investigate the be line in your script to determine what it’s supposed to do and correct it.

  • Check the repository to make sure the updates were pushed correctly.

  • If you continue to encounter problems, breaking the script down into smaller parts and running them individually might make troubleshooting easier.

Feel free to provide more specific questions if you need help with any particular part!


Certainly! It looks like there was an error during the Jupyter Book build process, as the root file intro.rst was not found. Based on your _toc.yml file, it seems like you’re referring to an intro.ipynb file as the root. You may want to ensure that the file exists.

Here’s the updated script based on what you shared. I’ve corrected the root in the _toc.yml and updated the process accordingly. If there’s an error related to the missing intro.ipynb file, you’ll want to create or specify that file in the appropriate location.

#!/bin/bash

# Set up directories and paths
cd ~/Dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"
mkdir -p $root_dir
cd $root_dir

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_befe"
if [ ! -f "$SSH_KEY_PATH" ]; then
    ssh-keygen -t ed25519 -C "muzaale@gmail.com" -f ~/.ssh/id_befe
    # Additional commands if needed
    cat ~/.ssh/id_befe.pub
    # Start the ssh-agent and add the key
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_befe
    # Copy the public key to your clipboard
    pbcopy < ~/.ssh/id_befe.pub
    echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
    echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("be_0" "be_1" "be_2" "be_3" "be_4" "be_5" "be_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
    mkdir -p ${acts[$i]}
    for ((j=1; j<=${act_files[$i]}; j++)); do
        cp ~/Dropbox/1f.ἡἔρις,κ/1.ontology/populate_be.ipynb "${acts[$i]}/be_i$(($i + 1))_$j.ipynb"
    done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "format: jb-book" > $toc_file
echo "root: intro.ipynb" >> $toc_file # Make sure this file exists
echo "title: Play" >> $toc_file
echo "parts:" >> $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
    echo "  - caption: Part $(($i + 1))" >> $toc_file
    echo "    chapters:" >> $toc_file
    for ((j=1; j<=${act_files[$i]}; j++)); do
        echo "      - file: ${acts[$i]}/be_i$(($i + 1))_$j" >> $toc_file
    done
done

# Create _config.yml file
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
cd ..
jb build $root_dir
git clone https://github.com/muzaale/fe 
cp -r be/* fe
cd fe 
git add ./*
git commit -m "1.1 Update Jupyter Book content"
chmod 600 ~/.ssh/id_befe
git remote -v
ssh-add -D
git remote set-url origin git@github.com:muzaale/fe
ssh-add ~/.ssh/id_befe
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to muzaale/fe repository!"

Please make sure that the intro.ipynb file exists or update the root entry in _toc.yml to point to the correct root file for your book.

888. ip.sh-i6#

victory!

#!/bin/bash

# Set up directories and paths
cd ~/Dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"
mkdir -p $root_dir
cd $root_dir

# Create the intro.ipynb file from populate_be.ipynb
cp ~/Dropbox/1f.ἡἔρις,κ/1.ontology/populate_be.ipynb intro.ipynb

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_befe"
if [ ! -f "$SSH_KEY_PATH" ]; then
    ssh-keygen -t ed25519 -C "muzaale@gmail.com" -f ~/.ssh/id_befe
    cat ~/.ssh/id_befe.pub
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_befe
    pbcopy < ~/.ssh/id_befe.pub
    echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
    echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("be_0" "be_1" "be_2" "be_3" "be_4" "be_5" "be_6")
act_files=(2 3 4 5 6 7 8)

# Create _toc.yml file
toc_file="_toc.yml"
echo "format: jb-book" > $toc_file
echo "root: intro.ipynb" >> $toc_file
echo "title: Play" >> $toc_file
echo "parts:" >> $toc_file

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
    mkdir -p ${acts[$i]}
    for ((j=1; j<=${act_files[$i]}; j++)); do
        cp ~/Dropbox/1f.ἡἔρις,κ/1.ontology/populate_be.ipynb "${acts[$i]}/be_i$(($i + 1))_$j.ipynb"
    done
done

for ((i=0; i<${#acts[@]}; i++)); do
    echo "  - caption: Part $(($i + 1))" >> $toc_file
    echo "    chapters:" >> $toc_file
    for ((j=1; j<=${act_files[$i]}; j++)); do
        echo "      - file: ${acts[$i]}/be_i$(($i + 1))_$j" >> $toc_file
    done
done

# Create _config.yml file
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
cd ..
jb build $root_dir
git clone https://github.com/muzaale/fe 
cp -r be/* fe
cd fe 
git add ./*
git commit -m "1.1 Update Jupyter Book content"
chmod 600 ~/.ssh/id_befe
git remote -v
ssh-add -D
git remote set-url origin git@github.com:muzaale/fe
ssh-add ~/.ssh/id_befe
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to muzaale/fe repository!"

889. ip.sh-spjd#

The revolution will be televised!


Certainly! Below is a modified version of the script that generalizes the GitHub username, repository name, and email address by taking them as inputs at the beginning of the script. This way, the script can be used for different users and repositories.

#!/bin/bash

# Input GitHub username, repository, and email address
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS

# Set up directories and paths
cd ~/Dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"
mkdir -p $root_dir
cd $root_dir

# Create the intro.ipynb file from populate_be.ipynb
cp ~/Dropbox/1f.ἡἔρις,κ/1.ontology/populate_be.ipynb intro.ipynb

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_befe"
if [ ! -f "$SSH_KEY_PATH" ]; then
    ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f ~/.ssh/id_befe
    cat ~/.ssh/id_befe.pub
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_befe
    pbcopy < ~/.ssh/id_befe.pub
    echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
    echo "SSH keys already exist for this repository. Skipping key generation."
fi

# ... rest of the code is unchanged ...

# Build the book with Jupyter Book
cd ..
jb build $root_dir
git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r be/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "1.1 Update Jupyter Book content"
chmod 600 ~/.ssh/id_befe
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add ~/.ssh/id_befe
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

This script will prompt the user to enter their GitHub username, repository name, and email address, and then it will proceed to use those values throughout the script. Make sure that the user has appropriate permissions to the repository, and that the repository exists on GitHub.

890. spjdworkflow#

#!/bin/bash

# Input GitHub username, repository, and email address
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS

# Set up directories and paths
cd ~/Dropbox/1f.ἡἔρις,κ/1.ontology/
root_dir="be"

# mkdir -p $root_dir
# cd $root_dir

# Create the intro.ipynb file from populate_be.ipynb
# cp ~/Dropbox/1f.ἡἔρις,κ/1.ontology/populate_be.ipynb intro.ipynb

# Check if SSH keys already exist, and if not, generate a new one
# SSH_KEY_PATH="$HOME/.ssh/id_befe"
# if [ ! -f "$SSH_KEY_PATH" ]; then
#   ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f ~/.ssh/id_befe
#   cat ~/.ssh/id_befe.pub
#   eval "$(ssh-agent -s)"
#   ssh-add ~/.ssh/id_befe
#   pbcopy < ~/.ssh/id_befe.pub
#   echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
# else
#   echo "SSH keys already exist for this repository. Skipping key generation."
# fi

# ... rest of the code is unchanged ...

# Build the book with Jupyter Book
# cd ..
jb build $root_dir
# git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r be/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "spjd workflow"
chmod 600 ~/.ssh/id_befe
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add ~/.ssh/id_befe
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

891. ideas#

Note

1.1.1 8:55 PM

  1. Facebook - comparing two girls

  2. Instagram - number of followers

  3. TikTok - ability to go viral

  4. Yafe - how many clinicians endorse

  5. Philosophe - risk calculator(s)

Note

1.1.2

  1. Clinical question

  2. Variables

  3. Data source

  4. Regression

  5. WebApp

  6. Fena hosting

  7. Turnaround: a week

  8. Demo & workflow using Python/nhanes

  9. Hopkins doesn’t care for the weights

  10. Represent public health issue

  11. After clinical variance

  12. Hierarchical models

  13. See fena for details

892. ip.sh-finale#

Certainly! Here’s the complete script, taking into account your requirements:

#!/bin/bash

# Input GitHub username, repository, email address, root directory, subdirectory, and source path
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology/): " ROOT_DIR
read -p "Enter the name of the subdirectory to be created within the root directory: " SUBDIR_NAME
read -p "Enter the path to the populate_be.ipynb file: " POPULATE_BE_PATH

# Set up directories and paths
cd $ROOT_DIR
mkdir -p $SUBDIR_NAME
cd $SUBDIR_NAME

# Create the intro.ipynb file from populate_be.ipynb
cp $POPULATE_BE_PATH intro.ipynb

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_befe"
if [ ! -f "$SSH_KEY_PATH" ]; then
  ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f ~/.ssh/id_befe
  cat ~/.ssh/id_befe.pub
  eval "$(ssh-agent -s)"
  ssh-add ~/.ssh/id_befe
  pbcopy < ~/.ssh/id_befe.pub
  echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
  echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Build the book with Jupyter Book
cd ..
jb build $SUBDIR_NAME
git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r $SUBDIR_NAME/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "1.1 Update Jupyter Book content"
chmod 600 ~/.ssh/id_befe
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add ~/.ssh/id_befe
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

This script allows users to specify the GitHub username, repository name, email address, root directory, subdirectory name, and path to the populate_be.ipynb file. It then carries out the tasks of setting up directories, copying files, managing SSH keys, and pushing updates to a GitHub repository.

Make sure to have the necessary permissions, dependencies, and configurations properly set up in your environment for the script to execute all these tasks.
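A hypothetical interactive run, answering the prompts with the same values the earlier hard-coded versions used:

$ ./ip.sh
Enter your GitHub username: muzaale
Enter your GitHub repository name: fe
Enter your email address: muzaale@gmail.com
Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology/): ~/Dropbox/1f.ἡἔρις,κ/1.ontology/
Enter the name of the subdirectory to be created within the root directory: be
Enter the path to the populate_be.ipynb file: ~/Dropbox/1f.ἡἔρις,κ/1.ontology/populate_be.ipynb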


Variant of the script that includes source path for populate_be.ipynb file:

Sure! You want to update the SSH key filename to include both the subdirectory name and the GitHub repository name. Here’s the modified code:

#!/bin/bash

# Input GitHub username, repository, email address, root directory, subdirectory, and source path
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology/): " ROOT_DIR
read -p "Enter the name of the subdirectory to be created within the root directory: " SUBDIR_NAME
read -p "Enter the path to the populate_be.ipynb file: " POPULATE_BE_PATH

# Set up directories and paths
cd $ROOT_DIR
mkdir -p $SUBDIR_NAME
cd $SUBDIR_NAME

# Create the intro.ipynb file from populate_be.ipynb
cp $POPULATE_BE_PATH intro.ipynb

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_${SUBDIR_NAME}${REPO_NAME}"
if [ ! -f "$SSH_KEY_PATH" ]; then
  ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f $SSH_KEY_PATH
  cat ${SSH_KEY_PATH}.pub
  eval "$(ssh-agent -s)"
  ssh-add $SSH_KEY_PATH
  pbcopy < ${SSH_KEY_PATH}.pub
  echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
  echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Build the book with Jupyter Book
cd ..
jb build $SUBDIR_NAME
git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r $SUBDIR_NAME/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "1.1 Update Jupyter Book content"
chmod 600 $SSH_KEY_PATH
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add $SSH_KEY_PATH
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

I replaced "id_befe" with "id_${SUBDIR_NAME}${REPO_NAME}" to include the subdirectory name and the GitHub repository name in the SSH key filename. Make sure that the SUBDIR_NAME and REPO_NAME variables are unique to avoid overwriting existing keys.
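For example, with SUBDIR_NAME=be and REPO_NAME=fe (the values from the earlier runs), the interpolation reproduces the original key path:

SSH_KEY_PATH="$HOME/.ssh/id_befe"   # "id_" + "be" + "fe"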

08/06/2023#

893. agift2me#

Certainly! Here’s the updated script, with “be_” and “be_i” replaced with the $SUBDIR_NAME variable:

#!/bin/bash

# Input GitHub username, repository, email address, root directory, subdirectory, and source path
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology/): " ROOT_DIR
read -p "Enter the name of the subdirectory to be created within the root directory: " SUBDIR_NAME
read -p "Enter the path to the populate_be.ipynb file: " POPULATE_BE_PATH

# Set up directories and paths
cd $ROOT_DIR
mkdir -p $SUBDIR_NAME
cd $SUBDIR_NAME

# Create the intro.ipynb file from populate_be.ipynb
cp $POPULATE_BE_PATH intro.ipynb

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_${SUBDIR_NAME}${REPO_NAME}"
if [ ! -f "$SSH_KEY_PATH" ]; then
  ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f $SSH_KEY_PATH
  cat ${SSH_KEY_PATH}.pub
  eval "$(ssh-agent -s)"
  ssh-add $SSH_KEY_PATH
  pbcopy < ${SSH_KEY_PATH}.pub
  echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
  echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("${SUBDIR_NAME}_0" "${SUBDIR_NAME}_1" "${SUBDIR_NAME}_2" "${SUBDIR_NAME}_3" "${SUBDIR_NAME}_4" "${SUBDIR_NAME}_5" "${SUBDIR_NAME}_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
  mkdir -p ${acts[$i]}
  for ((j=1; j<=${act_files[$i]}; j++)); do
    cp $POPULATE_BE_PATH "${acts[$i]}/${SUBDIR_NAME}_i$(($i + 1))_$j.ipynb"
  done
done

# Build the book with Jupyter Book
cd ..
jb build $SUBDIR_NAME
git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r $SUBDIR_NAME/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "1.1 Update Jupyter Book content"
chmod 600 $SSH_KEY_PATH
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add $SSH_KEY_PATH
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

This version utilizes the $SUBDIR_NAME variable throughout the script to refer to the specified subdirectory, providing consistency and adaptability to various use cases.
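Entering be at the subdirectory prompt, for instance, reproduces the earlier hard-coded layout (illustrative excerpt; be_6/be_i7_8.ipynb is the same file referenced in 894 below):

be/
├── intro.ipynb
├── be_0/    # be_i1_1.ipynb, be_i1_2.ipynb        (2 files)
├── be_1/    # be_i2_1.ipynb ... be_i2_3.ipynb     (3 files)
├── ...
└── be_6/    # be_i7_1.ipynb ... be_i7_8.ipynb     (8 files)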

894. agift2me#

spjd note-taking & updates

Hide code cell source
import nbformat
import os

# Define the path to the input and output files
input_path = os.path.expanduser('~/dropbox/1f.ἡἔρις,κ/1.ontology/be/be_6/be_i7_8.ipynb')
output_path = os.path.expanduser('~/dropbox/1f.ἡἔρις,κ/1.ontology/populate_fe.ipynb')

# Read the existing notebook
with open(input_path, 'r', encoding='utf-8') as f:
    notebook = nbformat.read(f, as_version=4)

# Create a new notebook for the organized content
new_notebook = nbformat.v4.new_notebook()

# Copy all cells as markdown
for cell in notebook.cells:
    markdown_cell = nbformat.v4.new_markdown_cell(cell['source'])
    new_notebook.cells.append(markdown_cell)

# Save the new notebook
with open(output_path, 'w', encoding='utf-8') as f:
    nbformat.write(new_notebook, f)

print(f"Organized content saved to {output_path}")
Organized content saved to /Users/d/dropbox/1f.ἡἔρις,κ/1.ontology/populate_fe.ipynb

895. python#

done
installation finished.
==> Changing ownership of paths required by anaconda; your password may be necessary.
🍺  anaconda was successfully installed!
(base) d@Poseidon 1.ontology %    
# Installing Homebrew and Anaconda on a Mac

## Install Homebrew
Run the following command on your terminal to install Homebrew. 
Homebrew is a package manager for Macs and is used to install useful development tools and software.

`/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`

## Install Anaconda through Homebrew
1. Run `brew install --cask anaconda` to install Anaconda
2. Run `echo 'export PATH=/usr/local/anaconda3/bin:$PATH' >> ~/.zshrc` from your terminal
3. Also run `echo 'export PATH=/opt/homebrew/anaconda3/bin:$PATH' >> ~/.zshrc` from your terminal
4. Run `source ~/.zshrc` from your terminal
5. Type `conda` to ensure that anaconda linked correctly.

#### If you use bash instead of zsh, replace steps 2 and 3 from above with the following:
- `echo 'export PATH=/usr/local/anaconda3/bin:$PATH' >> ~/.bash_profile`
- `echo 'export PATH=/opt/homebrew/anaconda3/bin:$PATH' >> ~/.bash_profile`
- `source ~/.bash_profile`


## If you've already installed anaconda from the installation file from anaconda.org
If you installed Anaconda for only this user, run the following:
- `echo 'export PATH=/Users/$USER/anaconda3/bin:$PATH' >> ~/.zshrc`
- `echo 'export PATH=/opt/homebrew/anaconda3/bin:$PATH' >> ~/.zshrc`
- `source ~/.zshrc`

If you installed Anaconda for all users on your computer, then run the following:
- `echo 'export PATH=/opt/anaconda3/bin:$PATH' >> ~/.zshrc`
- `source ~/.zshrc`
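A quick way to confirm the PATH change took effect (the same `which python` check comes up again later in this entry):

- `source ~/.zshrc`
- `which python` (should point into an anaconda3 `bin` directory)
- `conda --version`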

896. dailygrind#

Hide code cell source
import networkx as nx
import matplotlib.pyplot as plt


# Set seed for layout
seed = 42

# Directory structure
structure = {
    "Daily Grind": ["Gratitude", "Exercise", "WhatsApp", "Music", "Mentoring", "K08 Grant", "Mentoring"],
    "Gratitude": ["Journal", "Prayer", "Meditation", "Bible"],
    "Exercise": ["Swim", "Gym", "Jog", "Weights"],
    "WhatsApp": ["Family","Buddies","Colleagues"],
    "PhD Thesis": ["IRB", "Manuscripts", "Dissertation"],
    "K08 Grant": ["PhD Thesis", "R03 App"],
    "R03 App": ["R01", "K24", "U01"],
    "Mentoring": ["High School Students", "Undergraduates", "Graduate Students", "Medical Students", "Residents", "Fellows", "Faculty", "Analysts", "Staff", "Collaborators", "Graduates"],
    "Music": ["Gospel", "Piano", "Voice", "Hymns", "Classical", "Annuŋŋamya"],
}

G = nx.Graph()
node_colors = {}

# Function to capitalize first letter of each word
def capitalize_name(name):
    return ' '.join(word.capitalize() for word in name.split(" "))

# Add parent nodes with a default color
for parent in structure.keys():
    parent_name = capitalize_name(parent.replace("_", " "))
    G.add_node(parent_name)
    node_colors[parent_name] = 'lightblue'

for parent, children in structure.items():
    parent_name = capitalize_name(parent.replace("_", " "))
    for child in children:
        child_name = capitalize_name(child.replace("_", " "))
        G.add_edge(parent_name, child_name)
        # Override colors based on node type
        if child_name in structure["Exercise"]:
            node_colors[child_name] = 'lightgreen'
        elif child_name in structure["Gratitude"]:
            node_colors[child_name] = 'lavender'
        elif child_name in ["PhD Thesis", "K08 Grant", "R03 App"]:
            node_colors[child_name] = 'lightpink'
        elif child_name in structure["Music"]:
            node_colors[child_name] = 'lightyellow'
        elif parent == "WhatsApp" and child_name in structure[parent]:
            node_colors[child_name] = 'lightcoral'
        elif child_name in structure["Mentoring"]:
            node_colors[child_name] = 'lightskyblue'
        else:
            node_colors[child_name] = 'lightblue'


colors = [node_colors[node] for node in G.nodes()]

# Set figure size
plt.figure(figsize=(30, 30))

# Draw the graph
pos = nx.spring_layout(G, scale=30, seed=seed)
nx.draw_networkx_nodes(G, pos, node_size=10000, node_color=colors, edgecolors='black') # Boundary color set here
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos, font_size=20)
plt.show()
[Figure: spring-layout network graph of the Daily Grind structure generated by the code above]

Certainly! If you’re not concerned about exactly which Python environment you’re targeting and you simply want to install the anytree package into all discoverable environments on your system, you can create a shell script to attempt this.

Below is a bash script that finds all Python executables on your system and then uses them to install the anytree package via pip.

Please be aware that this script is somewhat aggressive and may change the environments of other projects on your system. Use with caution and understand what it’s doing before you execute it.

#!/bin/bash

# Find all python executables
pythons=$(find /usr /home /anaconda /opt -name 'python' -type f 2>/dev/null)

# Iterate through the found Python executables
for python in $pythons
do
  # Check if the executable corresponds to a Python interpreter
  if $python -c 'import sys; exit(sys.version_info < (3,))' 2>/dev/null; then
    echo "Found Python 3 interpreter at $python"
    
    # Try to install anytree using pip corresponding to the found Python interpreter
    echo "Installing anytree for $python"
    $python -m pip install anytree
  fi
done

echo "Done installing anytree for all found Python 3 interpreters."

To use this script, you would save it to a file (e.g., install-anytree.sh), make it executable, and then run it:

chmod +x install-anytree.sh
./install-anytree.sh

You would likely need to run this script with administrator permissions (e.g., using sudo on Linux) since some Python environments may be installed in system directories.

Important Note: This script attempts to find Python executables in common installation locations, and it may attempt to change environments that should not be altered (e.g., system Python). It’s meant as a “last resort” solution when the environment has become difficult to manage, and it should be used with an understanding of the potential impacts on your system. It may also require adaptation based on your specific system setup and preferences. Always back up important data and configurations before performing broad system changes like this.


Certainly! If the path to the Python interpreter that you want to target is within a virtual environment (e.g., myenv/bin/python), you can update the script to point to that specific path. Here’s an updated version of the script:

#!/bin/bash

# Path to the Python interpreter within your virtual environment
python_path="myenv/bin/python"

# Check if the specified path is indeed a Python 3 interpreter
if $python_path -c 'import sys; exit(sys.version_info < (3,))' 2>/dev/null; then
    echo "Found Python 3 interpreter at $python_path"

    # Install anytree using pip corresponding to the specified Python interpreter
    echo "Installing anytree for $python_path"
    $python_path -m pip install anytree
else
    echo "The specified path does not point to a Python 3 interpreter. Aborting."
fi

echo "Done."

Make sure to replace myenv with the full path to your virtual environment if it’s not in the same directory as the script.

Save this script to a file (e.g., install-anytree.sh), make it executable, and then run it:

chmod +x install-anytree.sh
./install-anytree.sh

This will install the anytree package specifically for the Python interpreter inside the virtual environment you’ve specified.

If you’re running your Jupyter Book within this virtual environment, you should now be able to import the anytree package without any errors. Make sure that your Jupyter Book is using this specific environment when building, and you should be good to go!


If /usr/local/anaconda3/bin/python is the specific Python interpreter that works and where you want to ensure that the anytree package is installed, you can create a simple shell script to install the package specifically for that interpreter.

Here’s a bash script:

#!/bin/bash

# Path to the Python interpreter that works
python_path="/usr/local/anaconda3/bin/python"

# Check if the specified path is indeed a Python 3 interpreter
if $python_path -c 'import sys; exit(sys.version_info < (3,))' 2>/dev/null; then
    echo "Found Python 3 interpreter at $python_path"

    # Install anytree using pip corresponding to the specified Python interpreter
    echo "Installing anytree for $python_path"
    $python_path -m pip install anytree
else
    echo "The specified path does not point to a Python 3 interpreter. Aborting."
fi

echo "Done."

You can save this script to a file (e.g., install-anytree.sh), make it executable, and then run it:

chmod +x install-anytree.sh
./install-anytree.sh

Since this script targets a specific interpreter, it should be less risky than the earlier, more aggressive script. If the Python interpreter at /usr/local/anaconda3/bin/python is part of a system installation, you might still need administrator permissions to run the script (e.g., using sudo on Linux).

This script should ensure that the anytree package is installed for the Python interpreter you’ve specified. If that’s the interpreter being used by your Jupyter Book, it should resolve the ModuleNotFoundError you were encountering.

Hide code cell source
!pip install treelib 
from treelib import Node, Tree
import sys
print(sys.executable)




# Define the directory structure
structure = {
    "Daily Grind": {
        "Gratitude": ["journal.md"],
        "Physical Exercise": ["swim.md", "gym.md", "jog.md"],
        "Check on Loved Ones": ["whats_app.md"],
        "Work": {
            "PhD Thesis": ["..."],
            "K08 Grant": ["..."],
            "R03 Grant Application": ["..."],
            "Mentoring": {
                "High School Students": [],
                "Undergraduates": [],
                "Graduate Students": [],
                "Medical Students": [],
                "Residents": [],
                "Fellows": [],
                "Faculty": [],
                "Analysts": [],
                "Staff": [],
                "Collaborators": [],
                "Graduates": [],
            },
            "...": []
        },
        "...": []
    }
}

counter = 0

# Function to recursively add nodes to the tree
def add_nodes(tree, parent, structure):
    global counter
    for key, value in structure.items():
        node_name = key.replace(" ", "_")
        if tree.contains(node_name):
            counter += 1
            node_name += "_" + str(counter) # Add suffix to make the node name unique
        tree.create_node(key, node_name, parent=parent)
        # If the value is a list, add file nodes
        if isinstance(value, list):
            for file_name in value:
                file_node_name = file_name if file_name != "..." else "ellipsis_" + str(counter)
                counter += 1
                tree.create_node(file_name, file_node_name, parent=node_name)
        # If the value is a dictionary, recurse into it
        elif isinstance(value, dict):
            add_nodes(tree, node_name, value)

# Create tree and root node
tree = Tree()
tree.create_node("Daily Grind", "Daily Grind")

# Add nodes based on the structure
add_nodes(tree, "Daily Grind", structure["Daily Grind"])

# Print the tree
tree.show()
Requirement already satisfied: treelib in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (1.6.4)
Requirement already satisfied: six in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from treelib) (1.16.0)
/Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/bin/python
Daily Grind
├── ...
├── Check on Loved Ones
│   └── whats_app.md
├── Gratitude
│   └── journal.md
├── Physical Exercise
│   ├── gym.md
│   ├── jog.md
│   └── swim.md
└── Work
    ├── ...
    ├── K08 Grant
    │   └── ...
    ├── Mentoring
    │   ├── Analysts
    │   ├── Collaborators
    │   ├── Faculty
    │   ├── Fellows
    │   ├── Graduate Students
    │   ├── Graduates
    │   ├── High School Students
    │   ├── Medical Students
    │   ├── Residents
    │   ├── Staff
    │   └── Undergraduates
    ├── PhD Thesis
    │   └── ...
    └── R03 Grant Application
        └── ...

The which python command shows that you are using the Python interpreter located at /usr/local/anaconda3/bin/python. This means that your terminal session is using the Python version installed through Anaconda in the (base) environment.

Here’s what you can do to ensure that Jupyter Notebook uses the same Python environment:

  1. Check the Python Path in Jupyter Notebook: You can check the Python path being used in the Jupyter Notebook by running:

    import sys
    print(sys.executable)
    

    If the path printed doesn’t match /usr/local/anaconda3/bin/python, then Jupyter is using a different Python interpreter.

  2. Install Jupyter in the Same Environment: If you find that Jupyter is using a different Python interpreter, you may want to install Jupyter within the (base) environment where treelib is installed:

    conda install jupyter
    
  3. Start Jupyter from the Same Environment: Make sure you are starting Jupyter Notebook from the terminal where the correct Python environment is activated. You can launch Jupyter by simply running:

    jupyter notebook
    

By ensuring that Jupyter Notebook is running in the same Python environment where treelib is installed, you should be able to import the library without any issues. If the problem persists, please provide more details, and I’ll be happy to assist further!

897. myenv#

In a Jupyter Notebook, you can select the kernel that corresponds to a specific Python environment. The kernel defines the programming environment in which the notebook runs. Here’s how you can make sure your .ipynb file uses the Python version located at /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/bin/python:

  1. Install the IPython Kernel in Your Virtual Environment: Make sure that your virtual environment (in this case, myenv) is activated and that the IPython kernel module is installed in it.

    You can install it using:

    pip install ipykernel
    
  2. Add Your Virtual Environment to Jupyter: You need to add a new kernel specification that points to your virtual environment’s Python binary. Run the following command:

    python -m ipykernel install --user --name=myenv
    

    This will create a new kernel specification that uses the Python interpreter from your virtual environment.

  3. Select the Kernel in Your Notebook:

    • Open your .ipynb file in Jupyter.

    • From the menu, choose “Kernel” > “Change kernel” > “myenv” (or whatever name you provided in the above command).

    • Your notebook will now be using the selected Python version from your virtual environment.

  4. Verify the Python Version: You can verify that you’re using the correct version of Python by running the following cell in your notebook:

    import sys
    print(sys.executable)
    

    It should print the path to the Python executable in your virtual environment.

By following these steps, you can ensure that your notebook uses the specific Python version you want, and any HTML export of the notebook will reflect this environment as well.
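One quick check before opening the notebook is to list the registered kernels and confirm that myenv is among them:

jupyter kernelspec list
# the output should include an entry named myenv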

import sys
print(sys.executable)
/Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/bin/python
import nbformat
import glob
import os

def change_kernel(notebook_path, kernel_name):
    with open(notebook_path) as f:
        nb = nbformat.read(f, as_version=4)

    nb.metadata.kernelspec = {
        "name": kernel_name,
        "language": "python",
        "display_name": kernel_name
    }

    with open(notebook_path, 'w') as f:
        nbformat.write(nb, f)

kernel_name = "myenv"
notebook_directory = "bloc" # Path to the directory containing your notebooks

# This line will search for all .ipynb files within the specified directory and its subdirectories
notebooks = glob.glob(os.path.join(notebook_directory, '**/*.ipynb'), recursive=True)

for notebook in notebooks:
    change_kernel(notebook, kernel_name)

print(f"Updated kernel to '{kernel_name}' for {len(notebooks)} notebooks.")

-rw-r--r--@ 1 d  staff    34105 Jul 25 20:09 bloc/bdn202301.ipynb
-rw-r--r--@ 1 d  staff    47849 Jul 25 20:12 bloc/bdn202302.ipynb
-rw-r--r--@ 1 d  staff     7841 Jul 18 08:03 bloc/bdn202303.ipynb
-rw-r--r--@ 1 d  staff    41438 Jul 26 10:55 bloc/bdn202304.ipynb
-rw-r--r--@ 1 d  staff   875558 Jul 27 12:04 bloc/bdn202305.ipynb
-rw-r--r--@ 1 d  staff  2796060 Jul 26 13:26 bloc/bdn202306.ipynb
-rw-r--r--@ 1 d  staff   738204 Jul 31 20:58 bloc/bdn202307.ipynb
-rw-r--r--@ 1 d  staff   868015 Aug  6 16:13 bloc/bdn202308.ipynb
-rw-r--r--@ 1 d  staff      214 Jul 18 05:51 bloc/bdn202309.ipynb
-rw-r--r--@ 1 d  staff      214 Jul 18 05:51 bloc/bdn202310.ipynb
-rw-r--r--@ 1 d  staff      214 Jul 18 05:51 bloc/bdn202311.ipynb
-rw-r--r--@ 1 d  staff      214 Jul 18 05:51 bloc/bdn202312.ipynb
(myenv) (base) d@Poseidon 1.ontology % 
(myenv) (base) d@Poseidon 1.ontology % python myscript.py
Updated kernel to 'myenv' for 489 notebooks.
(myenv) (base) d@Poseidon 1.ontology % 
(myenv) (base) d@Poseidon 1.ontology % ls -l
total 160
drwxr-xr-x@  21 d  staff    672 Aug  4 16:47 alpha
drwxr-xr-x@  14 d  staff    448 Aug  5 20:16 be
drwxr-xr-x@  23 d  staff    736 Aug  6 01:30 beta
drwxr-xr-x@  21 d  staff    672 Aug  4 00:38 blank
drwxr-xr-x@ 277 d  staff   8864 Aug  6 11:38 bloc
drwxr-xr-x@  22 d  staff    704 Aug  4 08:16 canvas
drwxr-xr-x@ 280 d  staff   8960 Aug  3 18:46 denotas
drwxr-xr-x@  15 d  staff    480 Aug  5 20:20 fe
drwxr-xr-x@  15 d  staff    480 Aug  1 14:43 fenagas
-rwxr-xr-x@   1 d  staff   1932 Aug  4 11:07 gc.sh
-rwxr-xr-x@   1 d  staff   1524 Aug  5 19:52 ip.sh
drwxr-xr-x@  29 d  staff    928 Jul 20 20:26 libro
drwxr-xr-x@ 144 d  staff   4608 Jun 23 23:20 livre
drwxr-xr-x@  14 d  staff    448 Aug  4 12:21 llc
drwxr-xr-x@  20 d  staff    640 Aug  2 13:18 mb
drwxr-xr-x@   7 d  staff    224 Aug  6 07:33 myenv
-rw-r--r--@   1 d  staff    802 Aug  6 16:20 myscript.py
drwxr-xr-x@  22 d  staff    704 Aug  4 08:16 og
-rw-r--r--@   1 d  staff    633 Aug  6 02:34 populate_be.ipynb
-rw-r--r--@   1 d  staff  61138 Aug  6 16:14 populate_fe.ipynb
-rwxr-xr-x@   1 d  staff    618 Aug  6 16:20 random.sh
drwxr-xr-x@  15 d  staff    480 Jul 31 01:05 repos
drwxr-xr-x@  18 d  staff    576 Jul 18 10:57 spring
drwxr-xr-x@ 139 d  staff   4448 Jun 25 08:29 summer
drwxr-xr-x@  14 d  staff    448 Jul 31 06:24 track
drwxr-xr-x@  25 d  staff    800 Jul 20 20:21 verano

898. fena#

Hide code cell source
import networkx as nx
import matplotlib.pyplot as plt

# Set seed for layout
seed = 2 

# Directory structure
structure = {
    "Fena": ["Epilogue", "Project", "Skills", "Dramatis Personae", "Challenges"],
    "Numbers": ["Variance", "R01", "K24", "U01"],
    "Epilogue": ["Open-Science", "Self-Publish", "Peer-Reviewed", "Grants", "Proposals"],
    "Skills": ["Python", "AI", "R", "Stata", "Numbers"],
    "AI": ["ChatGPT", "Co-Pilot"],
    "Project": ["Manuscript", "Code", "Git"],
    "Estimates": ["Nonparametric", "Semiparametric", "Parametric", "Simulation", "Uses/Abuses"],
    "Numbers": ["Estimates", "Variance"],
    "Variance": ["Oneway", "Twoway", "Multivariable", "Hierarchical", "Clinical", "Public"],
    "Dramatis Personae": ["High School Students", "Undergraduates", "Graduate Students", "Medical Students", "Residents", "Fellows", "Faculty", "Analysts", "Staff", "Collaborators", "Graduates"],
    "Challenges": ["Truth", "Rigor", "Error", "Sloppiness", "Fraud", "Learning"],
}

# Gentle colors for children
child_colors = ["lightgreen", "lightpink", "lightyellow",
    'lavender', 'lightcoral', 'honeydew', 'azure','lightblue', 
]

# 'lightsteelblue', 'lightgray', 'mintcream','mintcream', 'azure', 'linen', 'aliceblue', 'lemonchiffon', 'mistyrose'

# List of nodes to color light blue
light_blue_nodes = ["Epilogue", "Skills", "Dramatis Personae", "Project", "Challenges"]

G = nx.Graph()
node_colors = {}


# Function to capitalize the first letter of each word
def capitalize_name(name):
    return ' '.join(word.capitalize() for word in name.split(" "))

# Assign colors to nodes
for i, (parent, children) in enumerate(structure.items()):
    parent_name = capitalize_name(parent.replace("_", " "))
    G.add_node(parent_name)
    
    # Set the color for Fena
    if parent_name == "Fena":
        node_colors[parent_name] = 'lightgray'
    else:
        node_colors[parent_name] = child_colors[i % len(child_colors)]
        
    for child in children:
        child_name = capitalize_name(child.replace("_", " "))
        G.add_edge(parent_name, child_name)
        if child_name in light_blue_nodes:
            node_colors[child_name] = 'lightblue'
        else:
            node_colors[child_name] = child_colors[(i + 6) % len(child_colors)]  # You can customize the logic here to assign colors


colors = [node_colors[node] for node in G.nodes()]

# Set figure size
plt.figure(figsize=(30, 30))

# Draw the graph
pos = nx.spring_layout(G, scale=30, seed=seed)
nx.draw_networkx_nodes(G, pos, node_size=10000, node_color=colors, edgecolors='black')  # Boundary color set here
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos, font_size=20)
plt.show()
[Figure: spring-layout network graph of the Fena structure generated by the code above]

899. idiomatic#

Certainly! Below is the full table with all the languages and their idiomatic translations for “We are all in this together.” As mentioned previously, some of these translations might not perfectly capture the idiomatic meaning, so consulting with native speakers would be ideal.

Hide code cell source
from IPython.display import HTML

# Create data
languages = [
    'English', 'Luganda', 'Spanish', 'French', 'German', 'Italian', 'Portuguese', 'Dutch', # ... other languages
    'Russian', 'Hindi', 'Persian', 'Japanese','Arabic', 'Hebrew', 'Swahili', 'Zulu',
    'Yoruba', 'Igbo', 'Korean', 'Finnish', 'Amharic', 'Oromo', 'Tigrinya', 'Gujarati', 'Chinese'
]

translations = [
    'We are all in this together', 'Yaffe FFena', 'Todos estamos en esto juntos', 'Nous sommes tous dans le même bateau', # ... other translations
    'Wir sitzen alle im selben Boot','Siamo tutti nella stessa barca','Estamos todos juntos nisso','We zitten hier allemaal samen in','Мы все в этом вместе',  
    'हम सब इसमें साथ हैं','ما همه در این هستیم به همراه','これは私たちみんなのものです','نحن جميعًا في هذا معًا',
    'אנחנו כולנו בכלל בזה יחד', 'Sisi sote tuko pamoja katika hili','Sisonke kule','Awa gbogbo ni lori e pelu','Anyị nile nọ n’ime ya', 
    '우리는 모두 함께 이것에 있습니다', 'Olemme kaikki tässä yhdessä','እኛ ሁሉም በዚህ ተባበርንዋል','Hinqabu fi hinqabu jechuun','ምንም ነገርና እኛ በእኛ',
    'આપણે બધા આમાં જ છીએ','我们同舟共济'
]

# Variables to control the width of each column
column1_width = "50%"
column2_width = "50%"

# Variable to control table style ("plain" for plain text table)
table_style = "plain"

# Create HTML table with custom styles
if table_style == "plain":
    html_table = '<table>'
else:
    html_table = '<table style="border-collapse: collapse; width: 100%; margin-left: auto; margin-right: auto;">'

for lang, trans in zip(languages, translations):
    if table_style == "plain":
        html_table += f'<tr><td>{lang}</td><td>{trans}</td></tr>'
    else:
        html_table += f'<tr><td style="width: {column1_width}; border: none; text-align: center;">{lang}</td><td style="width: {column2_width}; border: none; text-align: center;">{trans}</td></tr>'
html_table += '</table>'

# Display the HTML table
display(HTML(html_table))
English: We are all in this together
Luganda: Yaffe FFena
Spanish: Todos estamos en esto juntos
French: Nous sommes tous dans le même bateau
German: Wir sitzen alle im selben Boot
Italian: Siamo tutti nella stessa barca
Portuguese: Estamos todos juntos nisso
Dutch: We zitten hier allemaal samen in
Russian: Мы все в этом вместе
Hindi: हम सब इसमें साथ हैं
Persian: ما همه در این هستیم به همراه
Japanese: これは私たちみんなのものです
Arabic: نحن جميعًا في هذا معًا
Hebrew: אנחנו כולנו בכלל בזה יחד
Swahili: Sisi sote tuko pamoja katika hili
Zulu: Sisonke kule
Yoruba: Awa gbogbo ni lori e pelu
Igbo: Anyị nile nọ n’ime ya
Korean: 우리는 모두 함께 이것에 있습니다
Finnish: Olemme kaikki tässä yhdessä
Amharic: እኛ ሁሉም በዚህ ተባበርንዋል
Oromo: Hinqabu fi hinqabu jechuun
Tigrinya: ምንም ነገርና እኛ በእኛ
Gujarati: આપણે બધા આમાં જ છીએ
Chinese: 我们同舟共济

Again, these translations were made with the understanding of the idiomatic nature of the phrase, but variations and subtleties may exist within each language and culture. For example, in the case of the Chinese translation, the phrase “同舟共济” is a common idiom that means “to work together in times of trouble.” However, the literal translation of the phrase is “to share the same boat.” This is a great example of how the literal translation of a phrase may not always capture the full meaning of the phrase.

900. epilogue#

epilogue n. a short concluding section at the end of a literary work, often dealing with the future of its characters after the main action of the plot is completed.


and that’s what co-pilot would have us believe. however, the word epilogue is also used in the context of the theatre, where it refers to a short speech delivered at the end of a play.


No epilogue, I pray you; for your play needs no excuse.

epilogue n. a short speech delivered at the end of a play.


but i’m inspired by the fena image in 898 above. epilogue is really the daily grind:

  1. folks come to fena from all walks of life

  2. then they take on a project

  3. challenges loom large

  4. but the idea is to overcome them

  5. and that involves acquiring new skills

  6. we can’t do it alone so we work together

    • fena is a community after all

  7. and we learn from each other

    • more novel: we recruit ai at each step of the way

    • chatbots, gpt-3, co-pilot, etc.

    • the prospect of exponential growth is exciting

  8. of course we can talk of all the world being a stage

    • but we are not actors

    • we are real people

    • and we are not playing a part

    • we are living our lives

    • and we are doing it together

    • and we are doing it in the open

    • and we are doing it for the world to see

  9. all actors may have their exits and entrances

    • but ours is a daily grind

    • and we are in it for the long haul

    • there’ll always be a tomorrow, and a tomorrow, and a tomorrow

    • repeat, iteration, etc

    • and so epilogue is really the daily grind:

      • challenge-levels against skill-levels

      • exponential growth of both

      • as if we were in a game

      • generative adversarial networks

        • and we are the players

        • and we are the game

        • and we are the game masters

        • and we are the game makers

        • and we are the game changers

08/07/2023#

901. colab#

  • chat in plain english with a computer and ask for python code

  • then copy & paste that python code into a colab notebook

  • run the code in colab and see the results

  • if you love them, link them to your github repo

  • you’ll have just launched yourself into the future

  • modify the colab or gpt code to do something else

  • you’ll find that copilot autocompletes your code

  • now you’ll have incorporated two ai’s into your workflow

  • without a doubt you’ll be able to do more in less time & enjoy it more

  • and if you’ve never coded in your life, you’ll be able to do it now (a minimal example of the kind of snippet you might paste is sketched below)
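For instance, a request as plain as “plot an event-free curve from a small table” might come back as something like the sketch below. The numbers and column names here are invented purely for illustration, so treat it as the shape of the thing rather than a prescription.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical follow-up times (years) and event-free percentages, purely illustrative
df = pd.DataFrame({
    "years": [1, 2, 3, 5, 8, 13, 21, 30],
    "pct_event_free": [99, 97, 95, 92, 88, 83, 77, 70],
})

# Plot the toy event-free curve as a step function
plt.step(df["years"], df["pct_event_free"], where="post")
plt.xlabel("Years")
plt.ylabel("%")
plt.title("toy event-free curve (illustrative numbers)")
plt.show()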

902. epilogue#

a deeply held belief of mine is that the future is already here, it’s just not evenly distributed. i’ve been working on this project for a while now and i’m excited to share it with you. i hope you enjoy it as much as i do.

903. images#

  • github copilot suggested that i don’t worry about local filepaths and just use urls

  • so i’m going to try that here

  • neat, innit?

    1. visit your repo

    2. locate the image you want to use

    3. click on it

    4. click on the raw button

    5. copy the url

    6. paste it into your notebook

    7. wrap it in markdown image syntax: an exclamation point, square brackets for the alt text, then the url in parentheses (see the sketch after this list)

    8. run the cell

    9. voila
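As a sketch (the username, repo, and filename below are placeholders, not real paths), the same raw url works either in a markdown cell or from python:

from IPython.display import Image, display

# Markdown-cell equivalent of the exclamation-point step above:
# ![my figure](https://raw.githubusercontent.com/<username>/<repo>/main/<image>.png)

# Python-cell equivalent using the same raw URL (placeholder values)
url = "https://raw.githubusercontent.com/<username>/<repo>/main/<image>.png"
display(Image(url=url))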

904. learning#

  1. Supervised, \(Y\): Trained on labeled data, the algorithm learns a function that maps inputs to desired outputs.

  2. Unsupervised, \(X\): Trained on unlabeled data, the algorithm tries to find hidden patterns and structures within the data (items 1 and 2 are contrasted in the sketch after this list).

  3. Quasisupervised, \(\beta\): Utilizes both labeled and unlabeled data to improve learning efficiency and performance.

  4. Reinforcement, \(\epsilon\): The algorithm learns to make decisions by interacting with an environment, receiving feedback as rewards or penalties.

  5. Transfer, \(z\): This involves taking knowledge gained from one task and applying it to a related, but different task, often improving learning efficiency in the new task.

  6. Generative adversarial networks, \(\rho\): A part of unsupervised learning, where two networks (generator and discriminator) are trained together competitively. The generator creates data, while the discriminator evaluates it. They are trained together, often leading to the generation of very realistic data.
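A minimal sketch of the first two items, assuming scikit-learn is available and using synthetic data: the supervised learner is handed the labels \(Y\), while the unsupervised one only ever sees \(X\).

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic data: two noisy clusters along a line
X = np.concatenate([rng.normal(0, 1, (50, 1)), rng.normal(5, 1, (50, 1))])
Y = 2 * X.ravel() + rng.normal(0, 0.5, 100)   # labels exist only for the supervised case

# Supervised: learn the mapping X -> Y from labeled pairs
reg = LinearRegression().fit(X, Y)
print("learned slope (should be near 2):", reg.coef_[0])

# Unsupervised: no Y at all, just look for structure in X
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))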

905. fena#

O. dramatis personae - players

  1. project

    • git

    • code

    • manuscript

  2. skills:

    • 2.1 computing

      • python

      • ai

      • r

      • stata

      • numeracy

    • 2.2 estimates

      • nonparametric

      • semiparametric

      • parametric

      • simulation

      • users/abuses

    • 2.3 variance

      • oneway

      • twoway

      • multivariable

      • hierarchical

      • clinical

      • public

  3. challenges:

    • truth

    • rigor

    • error

    • sloppiness

    • fraud

    • learning

    • literacy

N. epilogue - the daily grind

906. directory-structure#

I want a directory tree that looks like so: # Display the directory tree
echo "Directory Structure:"
echo "-------------------"
echo "alpha/
├── intro.ipynb
├── prologue.ipynb
├── Act I/
│   ├── act1_1.ipynb
│   ├── act1_2.ipynb
│   ├── act1_3.ipynb
│   └── ...
├── Act II/
│   ├── act2_1.ipynb
│   ├── act2_2.ipynb
│   └── ...
├── Act III/
│   ├── act3_1.ipynb
│   ├── act3_2.ipynb
│   ├── act3_3.ipynb
│   ├── act3_4.ipynb
│   └── act3_5.ipynb
├── Act IV/
│   ├── act4_1.ipynb
│   ├── act4_2.ipynb
│   ├── act4_3.ipynb
│   ├── act4_4.ipynb
│   ├── act4_5.ipynb
│   └── act4_6.ipynb
├── Act V/
│   ├── act5_1.ipynb
│   ├── act5_2.ipynb
│   ├── act5_3.ipynb
│   ├── act5_4.ipynb
│   ├── act5_5.ipynb
│   └── act5_6.ipynb
├── Epilogue/
│   ├── epi_1.ipynb
│   ├── epi_2.ipynb
│   ├── epi_3.ipynb
│   ├── epi_4.ipynb
│   ├── epi_5.ipynb
│   ├── epi_6.ipynb
│   ├── epi_7.ipynb
│   └── epi_8.ipynb
├── Gas & Spoke/
│   ├── gas_1.ipynb
│   ├── gas_2.ipynb
│   └── gas_3.ipynb
└── dramatis_personae/
    ├── high_school_students/
    │   ├── high_school_students_1/
    │   │   └── ...
    │   ├── high_school_students_2/
    │   │   └── ...
    │   ├── high_school_students_3/
    │   │   └── ...
    │   ├── high_school_students_4/
    │   │   └── ...
    │   └── high_school_students_5/
    │       └── ...
    ├── under_grads/
    │   ├── under_grads_1/
    │   │   └── ...
    │   ├── under_grads_2/
    │   │   └── ...
    │   ├── under_grads_3/
    │   │   └── ...
    │   ├── under_grads_4/
    │   │   └── ...
    │   └── under_grads_5/
    │       └── ...
    ├── grad_students/
    │   ├── grad_students_1/
    │   │   └── ...
    │   ├── grad_students_2/
    │   │   └── ...
    │   ├── grad_students_3/
    │   │   └── ...
    │   ├── grad_students_4/
    │   │   └── ...
    │   └── grad_students_5/
    │       └── ...
    ├── graduates/
    │   ├── graduates_1/
    │   │   └── ...
    │   ├── graduates_2/
    │   │   └── ...
    │   ├── graduates_3/
    │   │   └── ...
    │   ├── graduates_4/
    │   │   └── ...
    │   └── graduates_5/
    │       └── ...
    ├── medical_students/
    │   ├── medical_students_1/
    │   │   └── ...
    │   ├── medical_students_2/
    │   │   └── ...
    │   ├── medical_students_3/
    │   │   └── ...
    │   ├── medical_students_4/
    │   │   └── ...
    │   └── medical_students_5/
    │       └── ...
    ├── residents/
    │   ├── residents_1/
    │   │   └── ...
    │   ├── residents_2/
    │   │   └── ...
    │   ├── residents_3/
    │   │   └── ...
    │   ├── residents_4/
    │   │   └── ...
    │   └── residents_5/
    │       └── ...
    ├── fellows/
    │   ├── fellows_1/
    │   │   └── ...
    │   ├── fellows_2/
    │   │   └── ...
    │   ├── fellows_3/
    │   │   └── ...
    │   ├── fellows_4/
    │   │   └── ...
    │   └── fellows_5/
    │       └── ...
    ├── faculty/
    │   ├── faculty_1/
    │   │   └── ...
    │   ├── faculty_2/
    │   │   └── ...
    │   ├── faculty_3/
    │   │   └── ...
    │   ├── faculty_4/
    │   │   └── ...
    │   └── faculty_5/
    │       └── ...
    ├── analysts/
    │   ├── analysts_1/
    │   │   └── ...
    │   ├── analysts_2/
    │   │   └── ...
    │   ├── analysts_3/
    │   │   └── ...
    │   ├── analysts_4/
    │   │   └── ...
    │   └── analysts_5/
    │       └── ...
    ├── staff/
    │   ├── staff_1/
    │   │   └── ...
    │   ├── staff_2/
    │   │   └── ...
    │   ├── staff_3/
    │   │   └── ...
    │   ├── staff_4/
    │   │   └── ...
    │   └── staff_5/
    │       └── ...
    └── collaborators/
        ├── collaborators_1/
        │   └── ...
        ├── collaborators_2/
        │   └── ...
        ├── collaborators_3/
        │   └── ...
        ├── collaborators_4/
        │   └── ...
        └── collaborators_5/
            └── ..."

Certainly! If you want to represent this directory structure in a text format, you can simply print the structure as you provided. It represents a complex directory tree with multiple folders and subfolders.

You can include this output directly in your documentation or a README file to explain the structure to readers.

Here’s a Python code snippet that will print out the directory structure exactly as you provided:

directory_structure = """
Directory Structure:
-------------------
alpha/
├── intro.ipynb
├── prologue.ipynb
├── Act I/
│   ├── act1_1.ipynb
│   ├── act1_2.ipynb
│   ├── act1_3.ipynb
│   └── ...
├── Act II/
│   ├── act2_1.ipynb
│   ├── act2_2.ipynb
│   └── ...
├── Act III/
│   ├── act3_1.ipynb
│   ├── act3_2.ipynb
│   ├── act3_3.ipynb
│   ├── act3_4.ipynb
│   └── act3_5.ipynb
├── Act IV/
│   ├── act4_1.ipynb
│   ├── act4_2.ipynb
│   ├── act4_3.ipynb
│   ├── act4_4.ipynb
│   ├── act4_5.ipynb
│   └── act4_6.ipynb
├── Act V/
│   ├── act5_1.ipynb
│   ├── act5_2.ipynb
│   ├── act5_3.ipynb
│   ├── act5_4.ipynb
│   ├── act5_5.ipynb
│   └── act5_6.ipynb
├── Epilogue/
│   ├── epi_1.ipynb
│   ├── epi_2.ipynb
│   ├── epi_3.ipynb
│   ├── epi_4.ipynb
│   ├── epi_5.ipynb
│   ├── epi_6.ipynb
│   ├── epi_7.ipynb
│   └── epi_8.ipynb
├── Gas & Spoke/
│   ├── gas_1.ipynb
│   ├── gas_2.ipynb
│   └── gas_3.ipynb
└── dramatis_personae/
    ├── high_school_students/
    │   ├── high_school_students_1/
    │   │   └── ...
    │   ├── high_school_students_2/
    │   │   └── ...
    │   ├── high_school_students_3/
    │   │   └── ...
    │   ├── high_school_students_4/
    │   │   └── ...
    │   └── high_school_students_5/
    │       └── ...
    ├── under_grads/
    │   ├── under_grads_1/
    │   │   └── ...
    │   ├── under_grads_2/
    │   │   └── ...
    │   ├── under_grads_3/
    │   │   └── ...
    │   ├── under_grads_4/
    │   │   └── ...
    │   └── under_grads_5/
    │       └── ...
    ├── grad_students/
    │   ├── grad_students_1/
    │   │   └── ...
    │   ├── grad_students_2/
    │   │   └── ...
    │   ├── grad_students_3/
    │   │   └── ...
    │   ├── grad_students_4/
    │   │   └── ...
    │   └── grad_students_5/
    │       └── ...
    ├── graduates/
    │   ├── graduates_1/
    │   │   └── ...
    │   ├── graduates_2/
    │   │   └── ...
    │   ├── graduates_3/
    │   │   └── ...
    │   ├── graduates_4/
    │   │   └── ...
    │   └── graduates_5/
    │       └── ...
    ├── medical_students/
    │   ├── medical_students_1/
    │   │   └── ...
    │   ├── medical_students_2/
    │   │   └── ...
    │   ├── medical_students_3/
    │   │   └── ...
    │   ├── medical_students_4/
    │   │   └── ...
    │   └── medical_students_5/
    │       └── ...
    ├── residents/
    │   ├── residents_1/
    │   │   └── ...
    │   ├── residents_2/
    │   │   └── ...
    │   ├── residents_3/
    │   │   └── ...
    │   ├── residents_4/
    │   │   └── ...
    │   └── residents_5/
    │       └── ...
    ├── fellows/
    │   ├── fellows_1/
    │   │   └── ...
    │   ├── ...
    │   └── ...
    └── ...
"""
# (remaining branches elided; see the full tree above)

print(directory_structure)


It looks like you’re encountering an error because Graphviz’s command-line tool is not found in your system’s PATH. The Python package relies on the underlying Graphviz executables, so you’ll need to install them.

Here’s how to install Graphviz on various systems:

For Ubuntu/Debian:#

sudo apt-get install graphviz

For macOS (using Homebrew):#

brew install graphviz

For Windows:#

Download and install from the Graphviz website.

After installing Graphviz, if you’re still facing issues, you may need to manually add the path to the Graphviz executables to your system’s PATH environment variable. This process can vary depending on your operating system, so please let me know if you need specific guidance for your OS.

Restarting your Python environment (or your computer) might also be necessary to refresh the system’s PATH variable. Try running the code again after installing Graphviz and potentially restarting, and let me know if you still face any issues!


Got it! It seems like you want to create a representation of the directory structure you posted earlier, and perhaps manipulate it in Python.

While Python doesn’t have a built-in library specifically for drawing directory trees, you can use a combination of packages to achieve this. Here’s a step-by-step guide:

  1. Install Graphviz: Graphviz is an open-source tool for rendering directed graphs. It can be called from Python using the graphviz package.

  2. Install the Python graphviz Package: You can install the Graphviz Python package using pip:

    pip install graphviz
    
  3. Create the Directory Structure: Here’s a Python script that represents the directory structure you posted earlier using Graphviz. This code will create a visualization of the directory tree.

Hide code cell source
from graphviz import Digraph

def add_directory(graph, path, label=None):
    if label is None:
        label = path.split('/')[-1]
    graph.node(path, label)
    return path

def add_file(graph, path):
    label = path.split('/')[-1]
    graph.node(path, label, shape='box')
    return path

dot = Digraph('DirectoryStructure')
dot.attr(rankdir= 'LR', size='10,10')
# dot.attr(rankdir= 'TB', size='10,10')

alpha = add_directory(dot, 'alpha')
intro = add_file(dot, 'alpha/intro.ipynb')
prologue = add_file(dot, 'alpha/prologue.ipynb')
dot.edge(alpha, intro)      # connect the root-level notebooks to alpha/
dot.edge(alpha, prologue)

acts = ['I', 'II', 'III', 'IV', 'V']
for act_num, act in enumerate(acts, start=1):
    act_dir = add_directory(dot, f'alpha/Act {act}', f'Act {act}')
    dot.edge(alpha, act_dir)
    for i in range(1, 4): # Add more or fewer depending on the act
        act_file = add_file(dot, f'alpha/Act {act}/act{act_num}_{i}.ipynb')  # act1_1.ipynb, act2_1.ipynb, ...
        dot.edge(act_dir, act_file)

# Continue adding directories and files as needed

# dot.view() # Opens the graph in the default PDF viewer

# Specify the path where you want the PDF to be saved (the 'bloc' directory must already exist)
output_path = 'bloc/_toc_jb_dir_structure'

# Render the dot object to the specified path
dot.render(output_path, format='pdf')

This code creates a visual representation of the directory structure you described earlier (though it is partial and would need to be expanded to represent the entire structure).

You can further manipulate this structure using the Graphviz package or even use it to create actual directories and files on your system using the os and shutil libraries.

Let me know if you need further assistance with this!
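As a minimal sketch of that last point about the os and shutil libraries (purely illustrative; the acts and file counts here are assumptions, not the real table of contents), the same skeleton can be materialised on disk with the standard library:

import os

root = "alpha"

# Hypothetical skeleton: five acts with three placeholder notebooks each
for act_num, act in enumerate(["I", "II", "III", "IV", "V"], start=1):
    act_dir = os.path.join(root, f"Act {act}")
    os.makedirs(act_dir, exist_ok=True)          # create the directory if it is missing
    for i in range(1, 4):
        path = os.path.join(act_dir, f"act{act_num}_{i}.ipynb")
        open(path, "a").close()                  # touch an empty placeholder file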


907. gmu#

  1. alphabet

  2. scale3

  3. formula

  4. intervals

  5. buildchords

  6. diatonicchords

  7. tensions

  8. moretensions

  9. emotionalgrid

  10. chordstacksys

  11. reharm

  12. songs

  13. modalinterchange

908. gammadelta/gd.sh#

#!/bin/bash

# Input GitHub username, repository, email address, root directory, subdirectory, and source path
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology): " ROOT_DIR
read -p "Enter the name of the subdirectory to be created within the root directory: " SUBDIR_NAME
read -p "Enter the name of the populate_be.ipynb file in ROOT_DIR: " POPULATE_BE

# Set up directories and paths
cd $ROOT_DIR
mkdir -p $SUBDIR_NAME
cp $POPULATE_BE $SUBDIR_NAME/intro.ipynb
cd $SUBDIR_NAME

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_${SUBDIR_NAME}${REPO_NAME}"
if [ ! -f "$SSH_KEY_PATH" ]; then
  ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f $SSH_KEY_PATH
  cat ${SSH_KEY_PATH}.pub
  eval "$(ssh-agent -s)"
  ssh-add $SSH_KEY_PATH
  pbcopy < ${SSH_KEY_PATH}.pub
  echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
  echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("${SUBDIR_NAME}_0" "${SUBDIR_NAME}_1" "${SUBDIR_NAME}_2" "${SUBDIR_NAME}_3" "${SUBDIR_NAME}_4" "${SUBDIR_NAME}_5" 
"${SUBDIR_NAME}_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
  mkdir -p ${acts[$i]}
  for ((j=1; j<=${act_files[$i]}; j++)); do
    cp "intro.ipynb" "${acts[$i]}/${SUBDIR_NAME}_$(($i + 1))_$j.ipynb"
  done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "format: jb-book" > $toc_file
echo "root: intro.ipynb" >> $toc_file # Make sure this file exists
echo "title: Play" >> $toc_file
echo "parts:" >> $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
  echo "  - caption: Part $(($i + 1))" >> $toc_file
  echo "    chapters:" >> $toc_file
  for ((j=1; j<=${act_files[$i]}; j++)); do
    echo "      - file: ${acts[$i]}/${SUBDIR_NAME}_$(($i + 1))_$j.ipynb" >> $toc_file
  done
done

# Create _config.yml file
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
cd ..
jb build $SUBDIR_NAME
git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r $SUBDIR_NAME/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "now have gamma-delta & ga-de to play with"
chmod 600 $SSH_KEY_PATH
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add $SSH_KEY_PATH
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"
  1. the most beautiful program i’ve ever created

  2. can deploy a book from start-to-finish in 30sec

  3. it’s built from the ground up to be a book

  4. next step is to transfer .git commit history to here

  5. once that is done i can start to build an empire

909. thanku,next!#

#!/bin/bash

set -e  # Stop on any error

# Variables
OG_REPO=${1:-"https://github.com/afecdvi/og"}
CANVAS_REPO=${2:-"git@github.com:muzaale/canvas"}
SSH_KEY=${3:-"$HOME/.ssh/id_blankcanvas"}
FILENAME=${4:-"seasons.docx"}
BRANCH_NAME=${5:-"merge_seasons_docx"}

# Ensure git is installed
if ! command -v git &> /dev/null; then
    echo "WARNING: git could not be found. Please install git."
    exit 1
fi

# Set up SSH
echo "Setting up SSH..."
eval "$(ssh-agent -s)"
chmod 600 $SSH_KEY
ssh-add -D
ssh-add $SSH_KEY

# Navigate to the working directory
cd ~/dropbox/1f.ἡἔρις,κ/1.ontology

# Clone the 'og' repository and filter its history
echo "Cloning 'og' repository and filtering history for $FILENAME..."
TEMP_DIR="og_temp_$(date +%s)"
rm -rf $TEMP_DIR
git clone $OG_REPO $TEMP_DIR
cd $TEMP_DIR
git filter-branch --prune-empty --index-filter "
    git rm --cached --ignore-unmatch *;
    if [ -f '$FILENAME' ]; then
        git add $FILENAME;
    fi
" main

# Return to main directory
cd ../

# Check if the 'canvas' repository is present and if not, clone it
echo "Processing 'canvas' repository..."
if [ ! -d "canvas" ]; then
    git clone $CANVAS_REPO canvas
fi

cd canvas

# Recovery steps if HEAD is disconnected
if ! git symbolic-ref HEAD &> /dev/null; then
    echo "Reconnecting HEAD..."
    echo ref: refs/heads/main > .git/HEAD
fi

# Add the filtered 'og' repo as a remote and fetch its commits
if ! git remote | grep -q "og_temp"; then
    git remote add og_temp ../$TEMP_DIR
fi
git fetch og_temp

# Merge the filtered branch into a new branch
git checkout -b $BRANCH_NAME
git merge og_temp/main --allow-unrelated-histories

# If merge conflict occurs
if [ "$(git ls-files -u | wc -l)" -ne 0 ]; then
    echo "ERROR: Merge conflict detected. Please resolve manually."
    exit 1
fi

git push origin $BRANCH_NAME

# Cleanup
git remote remove og_temp
cd ..
rm -rf $TEMP_DIR

echo "Script executed successfully!"

The gchist.sh script achieves a series of tasks related to managing and merging specific content from one Git repository into another. Here’s a breakdown of what the script does:

  1. Error Handling: set -e ensures that if any command fails (returns a non-zero status), the script will exit immediately.

  2. Variable Setup: Several variables are set using command-line arguments or defaults, including URLs for repositories, an SSH key location, a filename, and a branch name.

  3. Git Installation Check: Checks if Git is installed on the system and exits if it is not found.

  4. SSH Setup: Initiates the SSH agent, sets the permissions on the SSH key, and adds it to the SSH agent.

  5. Working Directory Navigation: Navigates to a specific working directory.

  6. Clone and Filter ‘og’ Repository: Clones the ‘og’ repository into a temporary directory and filters its history to include only a specific file (FILENAME) in the main branch.

  7. Process ‘canvas’ Repository: If the ‘canvas’ repository doesn’t already exist, it clones it. If there’s a disconnected HEAD, it reconnects it to the main branch.

  8. Merge the Filtered ‘og’ Repo into ‘canvas’: Adds the filtered ‘og’ repository as a remote, fetches its commits, and then merges the filtered branch into a new branch within the ‘canvas’ repository.

  9. Merge Conflict Detection: If a merge conflict is detected, the script returns an error message and exits.

  10. Push to Origin: Pushes the newly created branch to the remote ‘canvas’ repository.

  11. Cleanup: Removes the temporary remote and the temporary directory containing the cloned and filtered ‘og’ repository.

  12. Success Message: Prints a success message.

In summary, this script clones and filters a specific file from the ‘og’ repository, then merges it into a new branch of the ‘canvas’ repository. It includes several error checks and recovery steps, so it is quite robust. It can be used for tasks such as combining documents or code across repositories, preserving the necessary history and relationships.

910. gchist.sh-victory?#

#!/bin/bash

# Define repositories and file
OG_REPO="https://github.com/afecdvi/og.git" # Replace with the actual URL or path to your original repo
CANVAS_REPO="https://github.com/muzaale/canvas.git" # Replace with the actual URL or path to your target repo
FILE_PATH="seasons.docx"

# Create temporary directories and clone the original and target repositories
TEMP_DIR_OG="og_temp_$(date +%s)"
TEMP_DIR_CANVAS="canvas_temp_$(date +%s)"
git clone "$OG_REPO" "$TEMP_DIR_OG"
git clone "$CANVAS_REPO" "$TEMP_DIR_CANVAS"

# Enter the original repository and filter for the file
cd "$TEMP_DIR_OG"
git filter-repo --path "$FILE_PATH" --force
git branch temp_filtered_branch

# Enter the temporary target repository
cd "../$TEMP_DIR_CANVAS"

# Add the temporary directory of the original repository as a remote
git remote add temp_remote "../$TEMP_DIR_OG"

# Fetch the temporary branch
git fetch temp_remote temp_filtered_branch

# Merge the temporary branch into a new branch in the temporary target repository
git checkout -b merge_seasons_docx
git merge temp_remote/temp_filtered_branch --allow-unrelated-histories

# Optionally, push the branch to the actual 'canvas' repository if everything looks good
# git push origin merge_seasons_docx

# Clean up
rm -rf "../$TEMP_DIR_OG"

echo "Processing finished. Check the 'merge_seasons_docx' branch in the temporary 'canvas' repository at $TEMP_DIR_CANVAS."

It looks like the commit message is still not formatted correctly. You need to remove the quotation marks and type your commit message, then save and exit the text editor. Here’s what you can do:

  1. Press the Esc key to ensure you are in normal mode.

  2. Press i to switch to insert mode.

  3. Use the arrow keys to navigate to the line with "happy commit", and type your commit message there (e.g., “Merging history of seasons.docx from afecdvi/og to muzaale/canvas”).

  4. Press the Esc key to return to normal mode.

  5. Type :wq to write the file and quit Vim.

  6. Press Enter.

This sequence should allow you to exit the text editor and continue with your git operation. Make sure your commit message is on a new line and not inside quotation marks. If you still have trouble, you can always exit without saving by typing :q! and then use a command-line text editor you’re more comfortable with to make the commit. For example, you can run git commit -m "Your commit message here" in the command line.


(myenv) (base) d@Poseidon 1.ontology % ./gchist.sh                                                                  
Cloning into 'og_temp_1691463414'...
remote: Enumerating objects: 566, done.
remote: Counting objects: 100% (566/566), done.
remote: Compressing objects: 100% (304/304), done.
remote: Total 566 (delta 296), reused 482 (delta 212), pack-reused 0
Receiving objects: 100% (566/566), 3.27 MiB | 11.92 MiB/s, done.
Resolving deltas: 100% (296/296), done.
Cloning into 'canvas_temp_1691463414'...
remote: Enumerating objects: 561, done.
remote: Counting objects: 100% (561/561), done.
remote: Compressing objects: 100% (214/214), done.
remote: Total 561 (delta 312), reused 547 (delta 298), pack-reused 0
Receiving objects: 100% (561/561), 1.90 MiB | 12.42 MiB/s, done.
Resolving deltas: 100% (312/312), done.
Parsed 42 commits
New history written in 0.09 seconds; now repacking/cleaning...
Repacking your repo and cleaning out old unneeded objects
HEAD is now at a19ebeb send this version to fawaz for review
Enumerating objects: 12, done.
Counting objects: 100% (12/12), done.
Delta compression using up to 20 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (12/12), done.
Total 12 (delta 2), reused 4 (delta 2), pack-reused 0
Completely finished after 0.26 seconds.
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 12 (delta 2), reused 12 (delta 2), pack-reused 0
Unpacking objects: 100% (12/12), 1.78 MiB | 14.62 MiB/s, done.
From ../og_temp_1691463414
 * branch            temp_filtered_branch -> FETCH_HEAD
 * [new branch]      temp_filtered_branch -> temp_remote/temp_filtered_branch
Switched to a new branch 'merge_seasons_docx'
Merge made by the 'ort' strategy.
Processing finished. Check the 'merge_seasons_docx' branch in the temporary 'canvas' repository at canvas_temp_1691463414.
(myenv) (base) d@Poseidon 1.ontology % 

911. gchist.sh#

  • mon aug 7 11:41 PM

  • imported history of seasons.docx from afecdvi/og to muzaale/canvas

  • this is a proof of concept & tells me i can now do the following:

    1. destroy old repos and extract what i need

    2. create new repos and import what i need

    3. all-round creative-destruction & i’m pleased

    4. now that our workflow has automated both the creation and destruction of repos, we can focus on the content

    5. i’m going to start by rebuilding fena from scratch

#!/bin/bash

# Repositories and file path
OG_REPO="https://github.com/afecdvi/og" # Replace with the original repo URL
CANVAS_REPO="https://github.com/muzaale/canvas" # Replace with the target repo URL
FILE_PATH="seasons.docx"

# Temporary directories for cloning repositories
TEMP_DIR_OG="og_temp_$(date +%s)"
TEMP_DIR_CANVAS="canvas_temp_$(date +%s)"

# Clone the original and target repositories
git clone "$OG_REPO" "$TEMP_DIR_OG"
git clone "$CANVAS_REPO" "$TEMP_DIR_CANVAS"

# Enter the original repository and filter for the file
cd "$TEMP_DIR_OG"
git filter-repo --path "$FILE_PATH" --force
git branch temp_filtered_branch

# Enter the temporary target repository
cd "../$TEMP_DIR_CANVAS"

# Add the temporary directory of the original repository as a remote
git remote add temp_remote "../$TEMP_DIR_OG"

# Fetch the temporary branch
git fetch temp_remote temp_filtered_branch

# Create a new branch and merge the temporary branch into it
git checkout -b merge_seasons_docx
git merge temp_remote/temp_filtered_branch --allow-unrelated-histories

# Push the new branch to the actual 'canvas' repository
git push origin merge_seasons_docx

# Clean up temporary directories
rm -rf "../$TEMP_DIR_OG" "../$TEMP_DIR_CANVAS"

echo "Processing finished. Check the 'merge_seasons_docx' branch in the 'canvas' repository."

08/08/2023#

912. bmv#

bach
handel
mozart
paulhlang


913. philosophe#

Certainly! It’s clear now that you’re looking to create a compelling pitch to Dr. Ben Philosophe, aligning Fena with something as sensational and zeitgeist-capturing as Facebook’s origins. Here’s a revised summary that weaves those elements together:

Social Media Comparisons#

  1. Facebook: Originated at Harvard, where undergraduates compared and ranked two girls at a time on “FaceMash.” This simple yet captivating concept took the world by storm.

  2. Instagram, TikTok, Yafe: Other platforms with their unique appeals, each capitalizing on specific trends or interests.

Introducing Fena by Fenagas#

Fena, crafted by Fenagas, is poised to become the next sensation in the realm of medical research and knowledge sharing, akin to how Facebook captured the zeitgeist of social connections.

What Sets Fena Apart?#

  • Dynamic Collaboration: A platform tailored for clinical professionals, researchers, students, and enthusiasts, designed to transform how medical findings are communicated.

  • Visual Appeal: Allows users to create visually stunning, interactive websites, capturing the audience’s attention just like Facebook’s original concept.

  • Open and Secure Environment: Champions open access to knowledge while maintaining secure collaboration.

The Power of Unity:#

  • Inclusive Community: Just as every spoke in a wheel propels motion, every contributor in Fena propels medical advancement.

  • Driven by Innovation: Like Facebook’s sensational beginning, Fena aims to redefine the way we explore and present medical research, becoming a catalyst for scientific collaboration.

A Pitch to Dr. Ben Philosophe:#

Dr. Philosophe, as an esteemed abdominal transplant surgeon in our department of surgery, you bring insights and leadership that are invaluable. We invite you to become part of the Fena movement. Picture a platform that brings together clinicians, statisticians, students, and researchers, creating an ecosystem where medical discoveries are not just published but brought to life.

Imagine a community where your groundbreaking surgical techniques and clinical insights are not confined to journals but can reach a global audience, inspiring future surgeons and enhancing patient care.

With Fena, we unite as a community dedicated to breaking down barriers, fostering a global network, and driving innovations in healthcare. We believe that Fena’s potential is as sensational and influential as the phenomenon that was Facebook’s inception. Your involvement could be the catalyst that propels Fena to new heights.

Join us in shaping the future of medical progress, and let’s make a real impact together with Fena.

Conclusion:#

Fena, with its innovative design and community-driven approach, seeks to become a trendsetter in medical research collaboration. Drawing parallels to the sensational impact of Facebook’s origins, Fena is positioned to capture a unique aspect of the current zeitgeist, transforming the medical community’s interaction, collaboration, and presentation of research. Dr. Ben Philosophe’s collaboration could be the key to unlocking Fena’s full potential.

914. workflow9.0#

914.1 directory#

(myenv) (base) d@Poseidon 1.ontology % ls -l
total 152
drwxr-xr-x@  13 d  staff    416 Aug  8 05:19 abi
-rwxr-xr-x@   1 d  staff   2843 Aug  8 05:16 abikesa.sh
drwxr-xr-x@  21 d  staff    672 Aug  4 16:47 alpha
drwxr-xr-x@  15 d  staff    480 Aug  6 17:14 be
drwxr-xr-x@  23 d  staff    736 Aug  7 02:44 beta
drwxr-xr-x@ 280 d  staff   8960 Aug  7 19:59 bloc
-rwxr-xr-x@   1 d  staff   1342 Aug  8 06:36 chandr.sh
drwxr-xr-x@  17 d  staff    544 Aug  7 20:45 de
drwxr-xr-x@  14 d  staff    448 Aug  7 18:30 delta
drwxr-xr-x@ 283 d  staff   9056 Aug  7 11:17 denotas
drwxr-xr-x@  16 d  staff    512 Aug  6 17:16 fe
drwxr-xr-x@  14 d  staff    448 Aug  8 06:32 fena
drwxr-xr-x@  15 d  staff    480 Aug  1 14:43 fenagas
drwxr-xr-x@  16 d  staff    512 Aug  8 04:20 ga
drwxr-xr-x@  14 d  staff    448 Aug  7 18:31 gamma
drwxr-xr-x@  17 d  staff    544 Aug  7 22:35 git-filter-repo
drwxr-xr-x@  14 d  staff    448 Aug  8 05:19 ikesa
drwxr-xr-x@  29 d  staff    928 Jul 20 20:26 libro
drwxr-xr-x@ 144 d  staff   4608 Jun 23 23:20 livre
drwxr-xr-x@  14 d  staff    448 Aug  4 12:21 llc
drwxr-xr-x@  20 d  staff    640 Aug  2 13:18 mb
drwxr-xr-x@   7 d  staff    224 Aug  6 07:33 myenv
drwxr-xr-x@  22 d  staff    704 Aug  4 08:16 og
-rw-r--r--@   1 d  staff    633 Aug  6 02:34 populate_be.ipynb
-rw-r--r--@   1 d  staff  61138 Aug  8 03:40 populate_fe.ipynb
-rwxr-xr-x@   1 d  staff    618 Aug  6 16:20 random.sh
drwxr-xr-x@  15 d  staff    480 Jul 31 01:05 repos
drwxr-xr-x@  18 d  staff    576 Jul 18 10:57 spring
drwxr-xr-x@ 139 d  staff   4448 Jun 25 08:29 summer
drwxr-xr-x@  14 d  staff    448 Jul 31 06:24 track
drwxr-xr-x@  25 d  staff    800 Jul 20 20:21 verano
drwxr-xr-x@  13 d  staff    416 Aug  8 06:32 yafe
(myenv) (base) d@Poseidon 1.ontology % 

914.2 abikesa.sh#

#!/bin/bash

# Input GitHub username, repository, email address, root directory, subdirectory, and source path
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology): " ROOT_DIR
read -p "Enter the name of the subdirectory to be created within the root directory: " SUBDIR_NAME
read -p "Enter the name of the populate_be.ipynb file in ROOT_DIR: " POPULATE_BE

# Set up directories and paths; originally gd.sh
cd $ROOT_DIR
mkdir -p $SUBDIR_NAME
cp $POPULATE_BE $SUBDIR_NAME/intro.ipynb
cd $SUBDIR_NAME

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_${SUBDIR_NAME}${REPO_NAME}"
if [ ! -f "$SSH_KEY_PATH" ]; then
  ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f $SSH_KEY_PATH
  cat ${SSH_KEY_PATH}.pub
  eval "$(ssh-agent -s)"
  ssh-add $SSH_KEY_PATH
  pbcopy < ${SSH_KEY_PATH}.pub
  echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
  echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("${SUBDIR_NAME}_0" "${SUBDIR_NAME}_1" "${SUBDIR_NAME}_2" "${SUBDIR_NAME}_3" "${SUBDIR_NAME}_4" "${SUBDIR_NAME}_5" 
"${SUBDIR_NAME}_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
  mkdir -p ${acts[$i]}
  for ((j=1; j<=${act_files[$i]}; j++)); do
    cp "intro.ipynb" "${acts[$i]}/${SUBDIR_NAME}_$(($i + 1))_$j.ipynb"
  done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "format: jb-book" > $toc_file
echo "root: intro.ipynb" >> $toc_file # Make sure this file exists
echo "title: Play" >> $toc_file
echo "parts:" >> $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
  echo "  - caption: Part $(($i + 1))" >> $toc_file
  echo "    chapters:" >> $toc_file
  for ((j=1; j<=${act_files[$i]}; j++)); do
    echo "      - file: ${acts[$i]}/${SUBDIR_NAME}_$(($i + 1))_$j.ipynb" >> $toc_file
  done
done

# Create _config.yml file
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
cd ..
jb build $SUBDIR_NAME
git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r $SUBDIR_NAME/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "now have gamma-delta & ga-de to play with"
chmod 600 $SSH_KEY_PATH
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add $SSH_KEY_PATH
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

writing output... [100%] yafe_6/yafe_7_8

914.3 chandr.sh#

#!/bin/bash

# Repositories and file path; originally gchist.sh
OG_REPO="https://github.com/afecdvi/og" # Replace with the original repo URL
CANVAS_REPO="https://github.com/muzaale/canvas" # Replace with the target repo URL
FILE_PATH="seasons.docx"

# Temporary directories for cloning repositories
TEMP_DIR_OG="og_temp_$(date +%s)"
TEMP_DIR_CANVAS="canvas_temp_$(date +%s)"

# Clone the original and target repositories
git clone "$OG_REPO" "$TEMP_DIR_OG"
git clone "$CANVAS_REPO" "$TEMP_DIR_CANVAS"

# Enter the original repository and filter for the file
cd "$TEMP_DIR_OG"
git filter-repo --path "$FILE_PATH" --force
git branch temp_filtered_branch

# Enter the temporary target repository
cd "../$TEMP_DIR_CANVAS"

# Add the temporary directory of the original repository as a remote
git remote add temp_remote "../$TEMP_DIR_OG"

# Fetch the temporary branch
git fetch temp_remote temp_filtered_branch

# Create a new branch and merge the temporary branch into it
git checkout -b merge_seasons_docx
git merge temp_remote/temp_filtered_branch --allow-unrelated-histories

# Push the new branch to the actual 'canvas' repository
git push origin merge_seasons_docx

# Clean up temporary directories
rm -rf "../$TEMP_DIR_OG" "../$TEMP_DIR_CANVAS"

echo "Processing finished. Check the 'merge_seasons_docx' branch in the 'canvas' repository."

915. workflow9.1#


  1. creative: abikesa.sh

#!/bin/bash

# cat ~/.ssh/id_yafefena.pub
# rm  ~/.ssh/id_yafefena.pub  ~/.ssh/id_yafefena 

# Input GitHub username, repository, email address, root directory, subdirectory, and source path
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology): " ROOT_DIR
read -p "Enter the name of the subdirectory to be created within the root directory: " SUBDIR_NAME
read -p "Enter the name of the populate_be.ipynb file in ROOT_DIR: " POPULATE_BE

# Set up directories and paths; originally gd.sh
cd $ROOT_DIR
mkdir -p $SUBDIR_NAME
cp $POPULATE_BE $SUBDIR_NAME/intro.ipynb
cd $SUBDIR_NAME

# Check if SSH keys already exist, and if not, generate a new one
SSH_KEY_PATH="$HOME/.ssh/id_${SUBDIR_NAME}${REPO_NAME}"
if [ ! -f "$SSH_KEY_PATH" ]; then
  ssh-keygen -t ed25519 -C "$EMAIL_ADDRESS" -f $SSH_KEY_PATH
  cat ${SSH_KEY_PATH}.pub
  eval "$(ssh-agent -s)"
  ssh-add $SSH_KEY_PATH
  pbcopy < ${SSH_KEY_PATH}.pub
  echo "SSH public key copied to clipboard. Please add it to your GitHub account's SSH keys."
else
  echo "SSH keys already exist for this repository. Skipping key generation."
fi

# Define arrays for acts and the number of files for each act
acts=("${SUBDIR_NAME}_0" "${SUBDIR_NAME}_1" "${SUBDIR_NAME}_2" "${SUBDIR_NAME}_3" "${SUBDIR_NAME}_4" "${SUBDIR_NAME}_5" 
"${SUBDIR_NAME}_6")
act_files=(2 3 4 5 6 7 8)

# Create Act directories and their corresponding files
for ((i=0; i<${#acts[@]}; i++)); do
  mkdir -p ${acts[$i]}
  for ((j=1; j<=${act_files[$i]}; j++)); do
    cp "intro.ipynb" "${acts[$i]}/${SUBDIR_NAME}_$(($i + 1))_$j.ipynb"
  done
done

# Create _toc.yml file
toc_file="_toc.yml"
echo "format: jb-book" > $toc_file
echo "root: intro.ipynb" >> $toc_file # Make sure this file exists
echo "title: Play" >> $toc_file
echo "parts:" >> $toc_file

for ((i=0; i<${#acts[@]}; i++)); do
  echo "  - caption: Part $(($i + 1))" >> $toc_file
  echo "    chapters:" >> $toc_file
  for ((j=1; j<=${act_files[$i]}; j++)); do
    echo "      - file: ${acts[$i]}/${SUBDIR_NAME}_$(($i + 1))_$j.ipynb" >> $toc_file
  done
done

# Create _config.yml file
config_file="_config.yml"
echo "title: Your Book Title" > $config_file
echo "author: Your Name" >> $config_file
echo "logo: logo.png" >> $config_file

# Build the book with Jupyter Book
cd ..
jb build $SUBDIR_NAME
git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r $SUBDIR_NAME/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "jhutrc: yafe,fena"
chmod 600 $SSH_KEY_PATH
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add $SSH_KEY_PATH
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

  2. destructive: chandr.sh

#!/bin/bash

# User-input
read -p "Enter original repo URL (e.g., https://github.com/afecdvi/og): " OG_REPO
read -p "Enter target repo URL (e.g. https://github.com/jhutrc/fena): " CANVAS_REPO
read -p "Enter filename (e.g. seasons.docx): " FILE_PATH
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology): " ROOT_DIR
read -p "Enter your SSH key location (e.g., ~/.ssh/id_yafefena): " SSH_KEY
read -p "Enter your email address for target repo: " GIT_EMAIL

# Expand the tilde if present
SSH_KEY_EXPANDED=$(eval echo $SSH_KEY)

if [ ! -f "$SSH_KEY_EXPANDED" ]; then
  echo "SSH key not found at $SSH_KEY_EXPANDED. Exiting."
  exit 1
fi


# Set working directory
cd "$(eval echo $ROOT_DIR)" || exit 1

# Configure SSH agent
eval "$(ssh-agent -s)"
ssh-add "$SSH_KEY_EXPANDED"

# Expand the tilde if present in ROOT_DIR
ROOT_DIR_EXPANDED=$(eval echo $ROOT_DIR)

# Temporary directories for cloning repositories
TEMP_DIR_OG="$ROOT_DIR_EXPANDED/OG_REPO_temp_$(date +%s)"
TEMP_DIR_CANVAS="$ROOT_DIR_EXPANDED/CANVAS_REPO_temp_$(date +%s)"

# Clone the original and target repositories
git clone "$OG_REPO" "$TEMP_DIR_OG"
git clone "$CANVAS_REPO" "$TEMP_DIR_CANVAS"

# Enter the original repository and filter for the file
cd "$TEMP_DIR_OG"
git filter-repo --path "$FILE_PATH" --force
git branch temp_filtered_branch

# Enter the temporary target repository
cd "$TEMP_DIR_CANVAS"

# Configure git email
git config user.email "$GIT_EMAIL"

# Add the temporary directory of the original repository as a remote
git remote add temp_remote "$TEMP_DIR_OG"

# Fetch the temporary branch
git fetch temp_remote temp_filtered_branch

# Create a new branch and merge the temporary branch into it
git checkout -b merge_$FILE_PATH
git merge temp_remote/temp_filtered_branch --allow-unrelated-histories

# Push the new branch to the actual 'canvas' repository
git push origin merge_$FILE_PATH

# Clean up temporary directories
rm -rf "$TEMP_DIR_OG" "$TEMP_DIR_CANVAS"

echo "Processing finished. Check the 'merge_$FILE_PATH' branch in the $CANVAS_REPO repository."
  3. flexibility: vscode

Certainly! It sounds like you’re dealing with an authentication issue in Git while working in Visual Studio Code. You might be trying to push to a remote repository where your other account, jhutrc, has permission, but your currently authenticated account, jhustata, does not.

Here’s what you can do:

  1. Switch Accounts in the Terminal: You can change your global Git username and email by running these commands in your terminal within VS Code:

    git config --global user.name "jhutrc"
    git config --global user.email "jhutrc@example.com" # Replace with the actual email
    

    If you want to set these just for the current repository (not globally), omit the --global flag.

  2. Re-authenticate with the Correct Account: If it’s an issue with the authentication (e.g., SSH keys or credentials), you might have to re-authenticate using the jhutrc account. Here’s how you might approach this:

    • SSH Key: If you’re using SSH, make sure the SSH key associated with the jhutrc account is added to the ssh-agent:

      ssh-add /path/to/jhutrc-private-key
      
    • HTTPS Credentials: If you’re using HTTPS, you might need to update or remove the stored credentials. You can often do this through the credential manager on your OS. On Windows, this would be the Credential Manager; on macOS, the Keychain Access; and on Linux, it might vary depending on the distribution and desktop environment.

  3. Use VS Code’s Integrated Authentication: Visual Studio Code has Git integration and might prompt you to sign in if authentication is needed. Ensure that you sign in with the correct account (i.e., jhutrc) when prompted.

Remember, you’ll need to replace "jhutrc@example.com" with the actual email address associated with the jhutrc account.

Keep in mind that these changes will affect how Git identifies you in all of your repositories if you use the --global flag. If you only want to make the change for the current repository, navigate to the repository directory in your terminal and run the commands without the --global flag.

916. workflow9.2#

  • some really cool unix stuff

  • and then some .git also

  • fena git-like logo!


#!/bin/bash

# User-defined inputs: this is game-changer #3 (flexible)
# Lookout for #1 (creative) and #2 (destructive)
read -p "Enter your GitHub username: " GITHUB_USERNAME
read -p "Enter your GitHub repository name: " REPO_NAME
read -p "Enter your email address: " EMAIL_ADDRESS
read -p "Enter your root directory (e.g., ~/Dropbox/1f.ἡἔρις,κ/1.ontology): " ROOT_DIR
read -p "Enter the name of the subdirectory to be built within the root directory: " SUBDIR_NAME
read -p "Enter your commit statement: " COMMIT_THIS
read -p "Enter your SSH key path (e.g., ~/.ssh/id_yafefena): " SSH_KEY_PATH

# Build the book with Jupyter Book
cd "$(eval echo $ROOT_DIR)"
jb build $SUBDIR_NAME

# Remove any stale clone of the target repo before cloning it afresh
rm -rf $REPO_NAME

if [ -d "$REPO_NAME" ]; then
  echo "Directory $REPO_NAME still exists (removal failed). Choose another directory or delete it manually."
  exit 1
fi

git clone "https://github.com/$GITHUB_USERNAME/$REPO_NAME"
cp -r $SUBDIR_NAME/* $REPO_NAME
cd $REPO_NAME
git add ./*
git commit -m "$COMMIT_THIS"
chmod 600 "$(eval echo $SSH_KEY_PATH)"
git remote -v
ssh-add -D
git remote set-url origin "git@github.com:$GITHUB_USERNAME/$REPO_NAME"
ssh-add "$(eval echo $SSH_KEY_PATH)"
git push -u origin main
ghp-import -n -p -f _build/html

echo "Jupyter Book content updated and pushed to $GITHUB_USERNAME/$REPO_NAME repository!"

917. gmail#

918. daily grind#

fena/
├── Intro/
│   ├── Act I/Project
│   ├── Act II/Challenges
│   ├── Act III/Skills
│   ├── Act IV/Estimation
│   ├── Act V/Inference
│   ├── Epilogue/Dailygrind
│   ├── Gas & Spoke/
│   └── Dramatis Personae/
├── Prologue/Background
├── Act I/Project
│   ├── act1_1.ipynb
│   ├── act1_2.ipynb
│   ├── act1_3.ipynb
│   └── ...
├── Act II/Challenges
│   ├── act2_1.ipynb
│   ├── act2_2.ipynb
│   └── ...
├── Act III/Skills
│   ├── act3_1.ipynb
│   ├── act3_2.ipynb
│   ├── act3_3.ipynb
│   ├── act3_4.ipynb
│   └── act3_5.ipynb
├── Act IV/Estimation
│   ├── act4_1.ipynb
│   ├── act4_2.ipynb
│   ├── act4_3.ipynb
│   ├── act4_4.ipynb
│   ├── act4_5.ipynb
│   └── act4_6.ipynb
├── Act V/Inference
│   ├── act5_1.ipynb
│   ├── act5_2.ipynb
│   ├── act5_3.ipynb
│   ├── act5_4.ipynb
│   ├── act5_5.ipynb
│   └── act5_6.ipynb
├── Epilogue/Dailygrind
│   ├── epi_1.ipynb
│   ├── epi_2.ipynb
│   ├── epi_3.ipynb
│   ├── epi_4.ipynb
│   ├── epi_5.ipynb
│   ├── epi_6.ipynb
│   ├── epi_7.ipynb
│   └── epi_8.ipynb
├── Gas & Spoke/
│   ├── gas_1.ipynb
│   ├── gas_2.ipynb
│   └── gas_3.ipynb
└── Dramatis Personae/
    ├── high_school_students/
    │   ├── high_school_students_1/
    │   │   └── ...
    │   ├── high_school_students_2/
    │   │   └── ...
    │   ├── high_school_students_3/
    │   │   └── ...
    │   ├── high_school_students_4/
    │   │   └── ...
    │   └── high_school_students_5/
    │       └── ...
    ├── undergraduates/
    │   ├── undergraduates_1/
    │   │   └── ...
    │   ├── undergraduates_2/
    │   │   └── ...
    │   ├── undergraduates_3/
    │   │   └── ...
    │   ├── undergraduates_4/
    │   │   └── ...
    │   └── undergraduates_5/
    │       └── ...
    ├── graduates/
    │   ├── graduates_1/
    │   │   └── ...
    │   ├── graduates_2/
    │   │   └── ...
    │   ├── graduates_3/
    │   │   └── ...
    │   ├── graduates_4/
    │   │   └── ...
    │   └── graduates_5/
    │       └── ...
    ├── medical_students/
    │   ├── medical_students_1/
    │   │   └── ...
    │   ├── medical_students_2/
    │   │   └── ...
    │   ├── medical_students_3/
    │   │   └── ...
    │   ├── medical_students_4/
    │   │   └── ...
    │   └── medical_students_5/
    │       └── ...
    ├── residents/
    │   ├── residents_1/
    │   │   └── ...
    │   ├── residents_2/
    │   │   └── ...
    │   ├── residents_3/
    │   │   └── ...
    │   ├── residents_4/
    │   │   └── ...
    │   └── residents_5/
    │       └── ...
    ├── fellows/
    │   ├── fellows_1/
    │   │   └── ...
    │   ├── fellows_2/
    │   │   └── ...
    │   ├── fellows_3/
    │   │   └── ...
    │   ├── fellows_4/
    │   │   └── ...
    │   └── fellows_5/
    │       └── ...
    ├── faculty/
    │   ├── faculty_1/
    │   │   └── ...
    │   ├── faculty_2/
    │   │   └── ...
    │   ├── faculty_3/
    │   │   └── ...
    │   ├── faculty_4/
    │   │   └── ...
    │   └── faculty_5/
    │       └── ...
    ├── analysts/
    │   ├── analysts_1/
    │   │   └── ...
    │   ├── analysts_2/
    │   │   └── ...
    │   ├── analysts_3/
    │   │   └── ...
    │   ├── analysts_4/
    │   │   └── ...
    │   └── analysts_5/
    │       └── ...
    ├── staff/
    │   ├── staff_1/
    │   │   └── ...
    │   ├── staff_2/
    │   │   └── ...
    │   ├── staff_3/
    │   │   └── ...
    │   ├── staff_4/
    │   │   └── ...
    │   └── staff_5/
    │       └── ...
    └── collaborators/
        ├── collaborators_1/
        │   └── ...
        ├── collaborators_2/
        │   └── ...
        ├── collaborators_3/
        │   └── ...
        ├── collaborators_4/
        │   └── ...
        └── collaborators_5/
            └── ...
Hide code cell source
import networkx as nx
import matplotlib.pyplot as plt

# Set seed for layout
seed = 2 

# Directory structure
structure = {
    "Fena": ["Epilogue", "Project", "Skills", "Dramatis Personae", "Challenges"],
    "Numbers": ["Variance", "R01", "K24", "U01"],
    "Epilogue": ["Open-Science", "Self-Publish", "Peer-Reviewed", "Grants", "Proposals"],
    "Skills": ["Python", "AI", "R", "Stata", "Numbers"],
    "AI": ["ChatGPT", "Co-Pilot"],
    "Project": ["Manuscript", "Code", "Git"],
    "Estimates": ["Nonparametric", "Semiparametric", "Parametric", "Simulation", "Uses/Abuses"],
    "Numbers": ["Estimates", "Variance"],
    "Variance": ["Oneway", "Twoway", "Multivariable", "Hierarchical", "Clinical", "Public"],
    "Dramatis Personae": ["High School Students", "Undergraduates", "Graduate Students", "Medical Students", "Residents", "Fellows", "Faculty", "Analysts", "Staff", "Collaborators", "Graduates"],
    "Challenges": ["Truth", "Rigor", "Error", "Sloppiness", "Fraud", "Learning"],
}

# Gentle colors for children
child_colors = ["lightgreen", "lightpink", "lightyellow",
    'lavender', 'lightcoral', 'honeydew', 'azure','lightblue', 
]

# 'lightsteelblue', 'lightgray', 'mintcream','mintcream', 'azure', 'linen', 'aliceblue', 'lemonchiffon', 'mistyrose'

# List of nodes to color light blue
light_blue_nodes = ["Epilogue", "Skills", "Dramatis Personae", "Project", "Challenges"]

G = nx.Graph()
node_colors = {}


# Function to capitalize the first letter of each word
def capitalize_name(name):
    return ' '.join(word.capitalize() for word in name.split(" "))

# Assign colors to nodes
for i, (parent, children) in enumerate(structure.items()):
    parent_name = capitalize_name(parent.replace("_", " "))
    G.add_node(parent_name)
    
    # Set the color for Fena
    if parent_name == "Fena":
        node_colors[parent_name] = 'lightgray'
    else:
        node_colors[parent_name] = child_colors[i % len(child_colors)]
        
    for child in children:
        child_name = capitalize_name(child.replace("_", " "))
        G.add_edge(parent_name, child_name)
        if child_name in light_blue_nodes:
            node_colors[child_name] = 'lightblue'
        else:
            node_colors[child_name] = child_colors[(i + 6) % len(child_colors)]  # You can customize the logic here to assign colors


colors = [node_colors[node] for node in G.nodes()]

# Set figure size
plt.figure(figsize=(30, 30))

# Draw the graph
pos = nx.spring_layout(G, scale=30, seed=seed)
nx.draw_networkx_nodes(G, pos, node_size=10000, node_color=colors, edgecolors='black')  # Boundary color set here
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos, font_size=20)
plt.show()
[Output figure: spring-layout network graph of the Fena structure defined above]

08/10/2023#

919. IV-maj7-(9)#

  • Diatonic chord IV of A flat major: D flat major seventh (9th)

  • Opening tension: 9th (E flat) in the melody (soprano) points to the ultimate key of song

  • I’m talking On Bended Knee, written by Jam & Lewis and performed by Boyz II Men (17:25/43:42); the chord tones are sketched below
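A minimal sketch of that chord spelling, assuming nothing more than equal-tempered pitch-class arithmetic with flat names (an illustration, not a harmony lesson):

# Spell D flat major seventh with an added 9th from semitone intervals above the root
FLAT_NAMES = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']

def chord_tones(root, intervals):
    root_pc = FLAT_NAMES.index(root)
    return [FLAT_NAMES[(root_pc + i) % 12] for i in intervals]

# maj7(9): root, major 3rd, perfect 5th, major 7th, major 9th
print(chord_tones('Db', [0, 4, 7, 11, 14]))   # ['Db', 'F', 'Ab', 'C', 'Eb'] -- Eb is the 9th in the melody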

920. kevinbond#

  • you are god alone - with marvin sapp

  • ultra-clean, innovative, yet sophisticated piano technique

  • when i think of gospel piano, i think of kevin bond

gospel/
├── andrae crouch/
│   ├── the winans
│   ├── fred hammond
│   ├── commissioned
│   ├── marvin sapp
│   ├── kirk franklin/
│   └── ...
├── kevin bond/
│   ├── aaron lindsey
│   └── ...
└── contemporary/
    ├── music director/
    │   ├── marvin sapp/
    │   │   └── ...
    │   ├── yolanda adams/
    │   │   └── ...
    │   └── etc/
    │       └── ...
    └── session musician/
        ├── warryn campbell/
        │   ├── mary mary
        │   └── ...
        └── cedric caldwell/
            └── cece winans

921. counterfeiting#

Simulating data for a Cox regression model that reflects specific hazard ratios and correlation structures is a complex task. We’ll need to:

  1. Define a covariance structure between the predictors.

  2. Simulate survival times based on a specified hazard function, incorporating the given hazard ratios.

First, we’ll create the dataset with the defined means and standard deviations, and a reasonable covariance structure between continuous variables. Then, we’ll define a hazard function using the given hazard ratios, and finally, generate survival times and censoring indicators.

Here’s a full code snippet:

Hide code cell source
import numpy as np
import pandas as pd
from scipy.stats import multivariate_normal

# Constants
N = 1000

# Means and covariance matrix for continuous variables (age, SBP, SCr, BMI, HbA1c)
mean_cont = [40, 124, 1, 27, 6]
cov_matrix = [
    [25, 5, 0.01, 2, 0.1],
    [5, 121, 0.02, 4, 0.2],
    [0.01, 0.02, 0.0004, 0.01, 0.001],
    [2, 4, 0.01, 25, 0.2],
    [0.1, 0.2, 0.001, 0.2, 0.64]
]
cont_vars = multivariate_normal.rvs(mean=mean_cont, cov=cov_matrix, size=N)

# Simulating categorical variables (Race, Education) and binary variables (Diabetes, Hypertension, Smoke, Male)
race = np.random.choice([0, 1, 2, 3, 4], N, p=[0.37, 0.23, 0.23, 0.13, 0.04])
education = np.random.choice([0, 1, 2, 3], N, p=[0.16, 0.42, 0.22, 0.20])
diabetes = np.random.choice([0, 1], N, p=[0.88, 0.12])
hypertension = np.random.choice([0, 1], N, p=[0.69, 0.31])
smoke = np.random.choice([0, 1], N, p=[0.43, 0.57])
male = np.random.choice([0, 1], N, p=[0.5, 0.5]) # Assuming a 50-50 split

# Hazard "score" loosely incorporating the given hazard ratios
# (note: the ratios are simply summed here, not applied multiplicatively as in a true Cox model)
def hazard_function(x):
    age, race, male, diabetes, hypertension, uacr, egfr, sbp, smoke = x
    hr = 0.5*age + [1, 3.2, 4, 0.7, 1.1][race] + 1.2*male + 5.2*diabetes + 1.0*hypertension + 4.0*uacr + 2.7*egfr + 2.3*sbp + 1.8*smoke
    return hr

# Simulating time to event (kidney failure) based on the hazard score;
# the simulated SCr and BMI columns stand in for the uACR and eGFR slots of the function
time_to_failure = np.zeros(N)
status = np.zeros(N)
for i in range(N):
    x = (cont_vars[i, 0], race[i], male[i], diabetes[i], hypertension[i], cont_vars[i, 2], cont_vars[i, 3], cont_vars[i, 1], smoke[i])
    hr = hazard_function(x)
    time_to_failure[i] = np.random.exponential(30/hr)   # exponential times with mean 30/hr
    status[i] = time_to_failure[i] < 30                  # essentially everyone fails within the window

# Combine all variables into DataFrame
data = np.column_stack([cont_vars, diabetes, hypertension, smoke, race, education, male, time_to_failure, status])
columns = ['age', 'SBP', 'SCr', 'BMI', 'HbA1c', 'Diabetes', 'Hypertension', 'Smoke', 'Race', 'Education', 'Male', 'Time_to_Kidney_Failure', 'Status']
df = pd.DataFrame(data, columns=columns)
df['Race'] = df['Race'].astype(int).map({0: 'White', 1: 'Black', 2: 'Hispanic', 3: 'Asian', 4: 'Other'})
df['Education'] = df['Education'].astype(int).map({0: 'K-8', 1: 'High School', 2: 'Some college', 3: 'College'})

# Save to CSV
csv_file = 'simulated_data.csv'
df.to_csv(csv_file, index=False)
print(f"Saved dataset to {csv_file}")

# Print summaries
print(df['Time_to_Kidney_Failure'].describe())
print(df['Status'].value_counts())
Saved dataset to simulated_data.csv
count    1000.000000
mean        0.078462
std         0.078522
min         0.000042
25%         0.021140
50%         0.053919
75%         0.107087
max         0.496502
Name: Time_to_Kidney_Failure, dtype: float64
Status
1.0    1000
Name: count, dtype: int64

Please note that the hazard ratios and the hazard function used here are simplifications, so this simulation may not accurately represent real-world relationships. Adjusting the means, variances, and covariances to match real-world data and expert opinion would be necessary to obtain a realistic simulation, and a more appropriate model form could be used to fit the hazard ratios more faithfully. Consulting a domain expert would be a crucial step in creating a realistic simulation.
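
As a point of reference, here is a minimal sketch of the more conventional way to simulate Cox-type data: put each target hazard ratio on the log scale as a coefficient, build a linear predictor, and draw exponential survival times whose rate is a baseline hazard multiplied by exp(linear predictor). The parameters here are assumptions for illustration (the baseline hazard of 0.02, the 30-year administrative censoring, the HR of 1.05 per year of age); only the HR of 5.2 for diabetes echoes the hazard function above.

import numpy as np
import pandas as pd

# Minimal sketch, not the analysis above: two illustrative covariates,
# target HRs entered as log-HR coefficients, exponential baseline hazard.
rng = np.random.default_rng(0)
n = 1000
age = rng.normal(40, 5, n)            # assumed mean/SD, centered below
diabetes = rng.binomial(1, 0.12, n)

log_hr_age = np.log(1.05)             # assumed HR of 1.05 per year of age
log_hr_diabetes = np.log(5.2)         # HR of 5.2 for diabetes, as in the text
lin_pred = log_hr_age * (age - age.mean()) + log_hr_diabetes * diabetes

baseline_hazard = 0.02                # assumed constant (exponential) baseline
event_time = rng.exponential(1 / (baseline_hazard * np.exp(lin_pred)))
censor_time = rng.uniform(0, 30, n)   # assumed administrative censoring over 30 years
time = np.minimum(event_time, censor_time)
status = (event_time <= censor_time).astype(int)

sim = pd.DataFrame({"age": age, "diabetes": diabetes, "time": time, "status": status})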

  1. Initial attempt

capture log close
cd ~/desktop
log using simulated_data.log, replace 
import delimited "simulated_data.csv", clear 
g time = time_to_kidney_failure * 100
stset time, fail(status)
#delimit ;
sts graph, 
    fail 
	per(100) 
	ylab(,
	    format(%2.0f)) 
	yti("%",
	    orientation(horizontal)) 
	xti("Years")
	ti("")
	tmax(30)
; 
#delimit cr
graph export simulated_data.png, replace 
encode race,g(racecat)
encode education,g(educat)
stcox age sbp scr bmi hba1c hypertension smoke i.racecat i.educat male
matrix list e(b)
matrix list e(V)
log close 
---------------------------------------------------------------------------------------------------------------
      name:  <unnamed>
       log:  /Users/d/Desktop/simulated_data.log
  log type:  text
 opened on:  10 Aug 2023, 08:18:38

. import delimited "simulated_data.csv", clear 
(encoding automatically selected: ISO-8859-1)
(13 vars, 1,000 obs)

. g time = time_to_kidney_failure * 100

. stset time, fail(status)

Survival-time data settings

         Failure event: status!=0 & status<.
Observed time interval: (0, time]
     Exit on or before: failure

--------------------------------------------------------------------------
      1,000  total observations
          0  exclusions
--------------------------------------------------------------------------
      1,000  observations remaining, representing
      1,000  failures in single-record/single-failure data
  7,846.243  total analysis time at risk and under observation
                                                At risk from t =         0
                                     Earliest observed entry t =         0
                                          Last observed exit t =  49.65025

. #delimit ;
delimiter now ;
. sts graph, 
>     fail 
>         per(100) 
>         ylab(,
>             format(%2.0f)) 
>         yti("%",
>             orientation(horizontal)) 
>         xti("Years")
>         ti("")
>         tmax(30)
> ;

        Failure _d: status
  Analysis time _t: time

.  #delimit cr
delimiter now cr
. graph export simulated_data.png, replace 
file /Users/d/Desktop/simulated_data.png saved as PNG format

. encode race,g(racecat)

. encode education,g(educat)

. stcox age sbp scr bmi hba1c hypertension smoke i.racecat i.educat male

        Failure _d: status
  Analysis time _t: time

Iteration 0:  Log likelihood = -5912.1282
Iteration 1:  Log likelihood = -5900.6377
Iteration 2:  Log likelihood = -5900.6298
Iteration 3:  Log likelihood = -5900.6298
Refining estimates:
Iteration 0:  Log likelihood = -5900.6298

Cox regression with no ties

No. of subjects =      1,000                            Number of obs =  1,000
No. of failures =      1,000
Time at risk    = 7,846.2435
                                                        LR chi2(15)   =  23.00
Log likelihood = -5900.6298                             Prob > chi2   = 0.0842

-------------------------------------------------------------------------------
           _t | Haz. ratio   Std. err.      z    P>|z|     [95% conf. interval]
--------------+----------------------------------------------------------------
          age |   1.000975   .0067914     0.14   0.886     .9877519    1.014374
          sbp |   1.006517   .0029668     2.20   0.028     1.000719    1.012349
          scr |   .8907616   1.431216    -0.07   0.943     .0382039    20.76897
          bmi |   .9961995   .0066147    -0.57   0.566      .983319    1.009249
        hba1c |   1.147071   .0477179     3.30   0.001     1.057257    1.244515
 hypertension |   1.100774   .0780406     1.35   0.176     .9579687    1.264868
        smoke |   1.026346   .0659869     0.40   0.686     .9048311     1.16418
              |
      racecat |
       Black  |   .9081203   .1005787    -0.87   0.384     .7309181    1.128283
    Hispanic  |   .8198603    .089963    -1.81   0.070     .6612075    1.016581
       Other  |   1.052135   .1928423     0.28   0.782     .7346115    1.506904
       White  |   .8901913   .0882279    -1.17   0.241     .7330267    1.081053
              |
       educat |
 High School  |   .9583789   .0816347    -0.50   0.618     .8110207    1.132511
         K-8  |   1.050765   .1133652     0.46   0.646     .8504933    1.298196
Some college  |   .9902475   .0914515    -0.11   0.915     .8262919    1.186736
              |
         male |   1.005739   .0646294     0.09   0.929     .8867199    1.140733
-------------------------------------------------------------------------------

. matrix list e(b)

e(b)[1,17]
                                                                                                     
             age           sbp           scr           bmi         hba1c  hypertension         smoke
y1     .00097419     .00649586    -.11567843    -.00380771     .13721187      .0960137     .02600498

              1b.            2.            3.            4.            5.           1b.            2.
         racecat       racecat       racecat       racecat       racecat        educat        educat
y1             0    -.09637839    -.19862137     .05082174     -.1163189             0    -.04251211

               3.            4.              
          educat        educat          male
y1     .04951838    -.00980032      .0057224

. matrix list e(V)

symmetric e(V)[17,17]
                                                                                                               
                       age           sbp           scr           bmi         hba1c  hypertension         smoke
         age     .00004603
         sbp    -1.981e-06     8.688e-06
         scr    -.00020263    -.00021484       2.58159
         bmi    -5.726e-06    -1.049e-06    -.00055839     .00004409
       hba1c     5.077e-06    -2.916e-06     -.0053437    -3.095e-06     .00173054
hypertension    -8.904e-06     9.251e-06      .0003493    -.00002983    -.00001066     .00502626
       smoke     5.508e-06    -4.123e-06     .00230588       .000011    -.00008874     .00014561     .00413359
  1b.racecat             0             0             0             0             0             0             0
   2.racecat     .00001443    -9.779e-06      .0049377    -.00004861     .00007833     .00034173    -.00014829
   3.racecat    -.00001571    -.00002631     .00689821    -.00002854     .00017538     .00005717    -.00007607
   4.racecat    -.00004832     9.913e-06      .0135565     .00001269     .00033953    -.00055507     .00033795
   5.racecat    -4.978e-06    -.00001205     .00911722    -.00003206     .00013383    -.00001345     .00019594
   1b.educat             0             0             0             0             0             0             0
    2.educat    -6.130e-06     5.540e-06    -.00257128    -.00001612     7.235e-06    -.00009945     .00009233
    3.educat    -.00002173    -.00001797     .00055141     6.090e-06     .00010844    -.00001648    -.00009682
    4.educat     4.749e-06     3.198e-06     .00407254     .00001789     .00008545    -.00005916     .00009838
        male    -.00001972     .00001925     .00137265     -.0000243     .00005277     .00022501    -.00003388

                        1b.            2.            3.            4.            5.           1b.            2.
                   racecat       racecat       racecat       racecat       racecat        educat        educat
  1b.racecat             0
   2.racecat             0     .01226663
   3.racecat             0     .00734394      .0120406
   4.racecat             0     .00710681     .00727447     .03359399
   5.racecat             0     .00723199     .00731763      .0072888     .00982301
   1b.educat             0             0             0             0             0             0
    2.educat             0     -.0004988    -.00068961    -.00166119    -.00054906             0     .00725562
    3.educat             0     -.0006126     -.0009112    -.00064009    -.00082472             0     .00452639
    4.educat             0    -.00061023    -.00050447    -.00043566    -.00044294             0     .00452582
        male             0     .00033395     .00035275    -.00039819      .0000884             0     .00024663

                         3.            4.              
                    educat        educat          male
    3.educat     .01163988
    4.educat      .0045199     .00852891
        male     .00016174     .00028983     .00412943

. log close 
      name:  <unnamed>
       log:  /Users/d/Desktop/simulated_data.log
  log type:  text
 closed on:  10 Aug 2023, 08:18:40
---------------------------------------------------------------------------------------------------------------

  2. Second attempt

It seems like you are dealing with a statistical analysis using a Cox regression model and you’d like to generate simulated data that has specific hazard ratios (HRs) for various covariates. From your Stata output, you have noticed that most HRs are close to 1, and you want them to be greater than 2.

The most common way to generate synthetic data that meets specific criteria is to start by defining the true underlying model from which the data are generated. You can then simulate data from that model, ensuring that the simulated data have the properties you want.

Here’s a step-by-step guide that you can follow in Python, using packages like NumPy and SciPy, to generate the desired simulated data.

  1. Import Necessary Libraries: You’ll need to import some libraries to create and manipulate the data.

import numpy as np
import pandas as pd
from scipy.stats import expon
  2. Define the Baseline Hazard Function: The Cox model assumes that the hazard function is a product of a baseline hazard and an exponential function of the covariates. Here, you can define a simple baseline hazard using an exponential distribution.

  3. Define the Covariate Structure: You’ll want to create the covariates based on the HRs you specified. Here’s an example for age, but you’ll want to do this for all your covariates.

n = 1000 # Number of observations

# Define age (one example covariate); suppose the desired per-unit HR is 2.7
age = np.random.normal(60, 10, n)
age_effect = np.log(2.7)  # coefficient on the log-hazard scale, so exp(coef) = 2.7

# Other covariates
# ...

# Constructing the hazard ratios
hazard_ratios = np.exp(age_effect * (age - np.mean(age)))
# ...

# Other HRs here, multiplying them to get the total HR
# ...
  4. Generate Survival Times: Now, you can generate the survival times using the baseline hazard and the hazard ratios. Here’s how you can do that using an exponential distribution.

baseline_hazard = 0.01
event_times = expon.rvs(scale=1/(baseline_hazard * hazard_ratios))
  5. Generate Censoring Times: Often in survival data, you’ll also have censoring. Here’s how you could generate censoring times and then use them to create the observed survival times and censoring indicators.

censoring_times = expon.rvs(scale=1/baseline_hazard, size=n)
observed_times = np.minimum(event_times, censoring_times)
status = event_times <= censoring_times
  6. Create DataFrame: Finally, you can create a DataFrame containing the covariates, observed times, and censoring status.

df = pd.DataFrame({
    'age': age,
    # Other covariates here...
    'time_to_kidney_failure': observed_times,
    'status': status
})
  7. Export the DataFrame: You can then export this DataFrame to a CSV file and use it in your Stata code.

df.to_csv('simulated_data.csv', index=False)

This process will generate data that carry approximately the hazard ratios you want, provided the coefficients in the linear predictor are specified on the log-hazard scale (i.e., as the natural log of each target HR). Repeat steps 3-5 for each of the covariates, and expect to tweak the distributions and parameters until the data look realistic for your particular setting.
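
As a quick worked check of that log-scale point (a sketch; the target HRs of 2.7, 5.2, and 1.8 are just example values quoted earlier in this section):

import numpy as np

# A coefficient beta on the log-hazard scale corresponds to an HR of exp(beta)
targets = {"egfr": 2.7, "diabetes": 5.2, "smoke": 1.8}    # example target HRs
betas = {k: np.log(v) for k, v in targets.items()}         # plug these into the linear predictor
print(betas)                                                # roughly {'egfr': 0.993, 'diabetes': 1.649, 'smoke': 0.588}
print({k: np.exp(b) for k, b in betas.items()})             # round-trips back to 2.7, 5.2, 1.8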


Certainly! To produce hazard ratios (HRs) that are greater than 2, we’ll need to modify both the coefficient matrix and the variance-covariance matrix. Below, I’ll provide a script that achieves this by simulating data and running a Cox regression analysis. The script will first generate data, ensuring that the coefficients yield HRs greater than 2, and then perform the analysis using the Stata commands you provided earlier.

import numpy as np
import pandas as pd

# Function to simulate data
def simulate_data(n=1000):
    np.random.seed(42)
    age = np.random.normal(60, 10, n)
    sbp = np.random.normal(120, 15, n)
    scr = np.random.normal(1, 0.5, n)
    bmi = np.random.normal(25, 5, n)
    hba1c = np.random.normal(5, 1, n)
    hypertension = np.random.randint(2, size=n)
    smoke = np.random.randint(2, size=n)
    racecat = np.random.choice(['Black', 'Hispanic', 'Other', 'White'], n)
    educat = np.random.choice(['High School', 'K-8', 'Some college'], n)
    male = np.random.randint(2, size=n)
    
    # Coefficients to generate HRs greater than 2
    beta = np.array([0.7, 0.8, 0.9, 0.6, 0.7, 0.8, 0.7, -0.6, -0.5, -0.4, -0.3, -0.7, -0.6, -0.5, -0.4, 0.7])
    
    # Simulate the linear predictor
    linear_pred = (beta[0] * age + beta[1] * sbp + beta[2] * scr + beta[3] * bmi +
                   beta[4] * hba1c + beta[5] * hypertension + beta[6] * smoke +
                   beta[7] * (racecat == 'Black') + beta[8] * (racecat == 'Hispanic') +
                   beta[9] * (racecat == 'Other') + beta[10] * (racecat == 'White') +
                   beta[11] * (educat == 'High School') + beta[12] * (educat == 'K-8') +
                   beta[13] * (educat == 'Some college') + beta[14] * male)
    
    # Simulate time to kidney failure
    time_to_kidney_failure = np.random.exponential(scale=np.exp(-linear_pred))
    
    # Create status indicator (all failed)
    status = np.ones(n, dtype=int)
    
    # Create a DataFrame
    df = pd.DataFrame({
        'age': age,
        'sbp': sbp,
        'scr': scr,
        'bmi': bmi,
        'hba1c': hba1c,
        'hypertension': hypertension,
        'smoke': smoke,
        'race': racecat,
        'education': educat,
        'male': male,
        'time_to_kidney_failure': time_to_kidney_failure,
        'status': status
    })
    
    return df

# Simulate the data
df = simulate_data()

# Save to CSV
df.to_csv("simulated_data.csv", index=False)

# Stata commands
stata_commands = """
capture log close
cd ~/desktop
log using simulated_data.log, replace 
import delimited "simulated_data.csv", clear 
g time = time_to_kidney_failure * 100
stset time, fail(status)
#delimit ;
sts graph, 
    fail 
    per(100) 
    ylab(,
        format(%2.0f)) 
    yti("%",
        orientation(horizontal)) 
    xti("Years")
    ti("")
    tmax(30)
; 
#delimit cr
graph export simulated_data.png, replace 
encode race,g(racecat)
encode education,g(educat)
stcox age sbp scr bmi hba1c hypertension smoke i.racecat i.educat male
matrix list e(b)
matrix list e(V)
log close
"""

# Print the Stata commands to run
print(stata_commands)

The above script first simulates data that should produce hazard ratios (HRs) greater than 2 for the given variables and then provides the Stata commands needed to perform the analysis.

You can copy the Stata commands from the script and run them in Stata to perform the Cox regression analysis and obtain the results. Make sure to place the generated “simulated_data.csv” file in the appropriate location on your desktop before running the Stata commands.

Hide code cell source
import numpy as np
import pandas as pd
from scipy.stats import multivariate_normal

# Function to simulate data
def simulate_data(n=1000):
    np.random.seed(42)
    age = np.random.normal(60, 10, n)
    sbp = np.random.normal(120, 15, n)
    scr = np.random.normal(1, 0.5, n)
    bmi = np.random.normal(25, 5, n)
    hba1c = np.random.normal(5, 1, n)
    hypertension = np.random.randint(2, size=n)
    smoke = np.random.randint(2, size=n)
    racecat = np.random.choice(['Black', 'Hispanic', 'Other', 'White'], n)
    educat = np.random.choice(['High School', 'K-8', 'Some college'], n)
    male = np.random.randint(2, size=n)
    
    # Coefficients to generate HRs greater than 2
    beta = np.array([0.7, 0.8, 0.9, 0.6, 0.7, 0.8, 0.7, -0.6, -0.5, -0.4, -0.3, -0.7, -0.6, -0.5, -0.4, 0.7])
    
    # Simulate the linear predictor
    linear_pred = (beta[0] * age + beta[1] * sbp + beta[2] * scr + beta[3] * bmi +
                   beta[4] * hba1c + beta[5] * hypertension + beta[6] * smoke +
                   beta[7] * (racecat == 'Black') + beta[8] * (racecat == 'Hispanic') +
                   beta[9] * (racecat == 'Other') + beta[10] * (racecat == 'White') +
                   beta[11] * (educat == 'High School') + beta[12] * (educat == 'K-8') +
                   beta[13] * (educat == 'Some college') + beta[14] * male)
    
    # Simulate time to kidney failure
    time_to_kidney_failure = np.random.exponential(scale=np.exp(-linear_pred))
    
    # Create status indicator (all failed)
    status = np.ones(n, dtype=int)
    
    # Create a DataFrame
    df = pd.DataFrame({
        'age': age,
        'sbp': sbp,
        'scr': scr,
        'bmi': bmi,
        'hba1c': hba1c,
        'hypertension': hypertension,
        'smoke': smoke,
        'race': racecat,
        'education': educat,
        'male': male,
        'time_to_kidney_failure': time_to_kidney_failure,
        'status': status
    })
    
    return df

# Simulate the data
df = simulate_data()

# Save to CSV
df.to_csv("simulated_data_2.csv", index=False)

# Stata commands
stata_commands = """
capture log close
cd ~/desktop
log using simulated_data.log, replace 
import delimited "simulated_data.csv", clear 
g time = time_to_kidney_failure * 100
stset time, fail(status)
#delimit ;
sts graph, 
    fail 
    per(100) 
    ylab(,
        format(%2.0f)) 
    yti("%",
        orientation(horizontal)) 
    xti("Years")
    ti("")
    tmax(30)
; 
#delimit cr
graph export simulated_data.png, replace 
encode race,g(racecat)
encode education,g(educat)
stcox age sbp scr bmi hba1c hypertension smoke i.racecat i.educat male
matrix list e(b)
matrix list e(V)
log close
"""

# Print the Stata commands to run
print(stata_commands)

# Print summaries
print(df.describe())
  3. Now a third attempt in Python

Certainly! Below, I’ll continue the Python script by performing the Cox regression analysis using the lifelines package. We’ll fit the Cox Proportional Hazards model to the simulated data, display the HRs, 95% CIs, coefficient matrix, and variance-covariance matrix.

If you don’t have the lifelines library installed, you can install it using:

pip install lifelines

And here is the continuation of the Python script:

Hide code cell source
# !pip install lifelines

import pandas as pd
from lifelines import CoxPHFitter

# Convert categorical variables to dummies
df_dummies = pd.get_dummies(df, columns=['race', 'education'], drop_first=True)

# Instantiate the Cox Proportional Hazards model
cph = CoxPHFitter()

# Fit the model to the data
cph.fit(df_dummies, duration_col='time_to_kidney_failure', event_col='status')

# Print the summary table, which includes HRs and 95% CIs
cph.print_summary()

# Coefficient matrix (log hazard ratios)
coefficients = cph.params_
print("Coefficient Matrix:")
print(coefficients)

# Variance-covariance matrix
variance_covariance_matrix = cph.variance_matrix_
print("Variance-Covariance Matrix:")
print(variance_covariance_matrix)
 
Collecting lifelines
  Using cached lifelines-0.27.7-py3-none-any.whl (409 kB)
Requirement already satisfied: numpy>=1.14.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from lifelines) (1.25.2)
Requirement already satisfied: scipy>=1.2.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from lifelines) (1.11.1)
Requirement already satisfied: pandas>=1.0.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from lifelines) (2.0.3)
Requirement already satisfied: matplotlib>=3.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from lifelines) (3.7.2)
Collecting autograd>=1.5 (from lifelines)
  Obtaining dependency information for autograd>=1.5 from https://files.pythonhosted.org/packages/81/70/d5c7c2a458b8be96495c8b1634c2155beab58cbe864b7a9a5c06c2e52520/autograd-1.6.2-py3-none-any.whl.metadata
  Downloading autograd-1.6.2-py3-none-any.whl.metadata (706 bytes)
Collecting autograd-gamma>=0.3 (from lifelines)
  Using cached autograd_gamma-0.5.0-py3-none-any.whl
Collecting formulaic>=0.2.2 (from lifelines)
  Obtaining dependency information for formulaic>=0.2.2 from https://files.pythonhosted.org/packages/db/97/2ac97273f77138c4248ab63cdc4799ea3c87ea2be2d28bb726562e0d0827/formulaic-0.6.4-py3-none-any.whl.metadata
  Downloading formulaic-0.6.4-py3-none-any.whl.metadata (6.0 kB)
Collecting future>=0.15.2 (from autograd>=1.5->lifelines)
  Using cached future-0.18.3-py3-none-any.whl
Collecting astor>=0.8 (from formulaic>=0.2.2->lifelines)
  Using cached astor-0.8.1-py2.py3-none-any.whl (27 kB)
Collecting interface-meta>=1.2.0 (from formulaic>=0.2.2->lifelines)
  Using cached interface_meta-1.3.0-py3-none-any.whl (14 kB)
Collecting typing-extensions>=4.2.0 (from formulaic>=0.2.2->lifelines)
  Obtaining dependency information for typing-extensions>=4.2.0 from https://files.pythonhosted.org/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl.metadata
  Downloading typing_extensions-4.7.1-py3-none-any.whl.metadata (3.1 kB)
Collecting wrapt>=1.0 (from formulaic>=0.2.2->lifelines)
  Downloading wrapt-1.15.0-cp311-cp311-macosx_10_9_x86_64.whl (35 kB)
Requirement already satisfied: contourpy>=1.0.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (1.1.0)
Requirement already satisfied: cycler>=0.10 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (4.42.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (1.4.4)
Requirement already satisfied: packaging>=20.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (23.1)
Requirement already satisfied: pillow>=6.2.0 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (10.0.0)
Requirement already satisfied: pyparsing<3.1,>=2.3.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from matplotlib>=3.0->lifelines) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from pandas>=1.0.0->lifelines) (2023.3)
Requirement already satisfied: tzdata>=2022.1 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from pandas>=1.0.0->lifelines) (2023.3)
Requirement already satisfied: six>=1.5 in /Users/d/Dropbox (Personal)/1f.ἡἔρις,κ/1.ontology/myenv/lib/python3.11/site-packages (from python-dateutil>=2.7->matplotlib>=3.0->lifelines) (1.16.0)
Downloading autograd-1.6.2-py3-none-any.whl (49 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.3/49.3 kB 1.5 MB/s eta 0:00:00
Downloading formulaic-0.6.4-py3-none-any.whl (88 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.9/88.9 kB 2.9 MB/s eta 0:00:00
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Installing collected packages: wrapt, typing-extensions, interface-meta, future, astor, autograd, formulaic, autograd-gamma, lifelines
Successfully installed astor-0.8.1 autograd-1.6.2 autograd-gamma-0.5.0 formulaic-0.6.4 future-0.18.3 interface-meta-1.3.0 lifelines-0.27.7 typing-extensions-4.7.1 wrapt-1.15.0
model lifelines.CoxPHFitter
duration col 'time_to_kidney_failure'
event col 'status'
baseline estimation breslow
number of observations 1000
number of events observed 1000
partial log-likelihood -3463.10
time fit was run 2023-08-10 12:52:30 UTC
covariate                 coef  exp(coef)  se(coef)  coef lower 95%  coef upper 95%  exp(coef) lower 95%  exp(coef) upper 95%  cmp to      z       p  -log2(p)
age                       0.71       2.04      0.02            0.68            0.75                 1.97                 2.12    0.00  38.36  <0.005       inf
sbp                       0.82       2.27      0.02            0.78            0.86                 2.18                 2.37    0.00  38.71  <0.005       inf
scr                       0.83       2.29      0.07            0.69            0.97                 1.99                 2.64    0.00  11.40  <0.005     97.53
bmi                       0.63       1.87      0.02            0.59            0.66                 1.81                 1.93    0.00  35.88  <0.005    934.01
hba1c                     0.71       2.03      0.04            0.63            0.78                 1.88                 2.19    0.00  18.49  <0.005    251.26
hypertension              0.81       2.25      0.07            0.67            0.94                 1.96                 2.57    0.00  11.71  <0.005    102.82
smoke                     0.75       2.12      0.07            0.62            0.89                 1.85                 2.42    0.00  10.89  <0.005     89.29
male                     -0.36       0.70      0.07           -0.49           -0.23                 0.61                 0.80    0.00  -5.30  <0.005     23.06
race_Hispanic             0.06       1.07      0.09           -0.12            0.25                 0.89                 1.28    0.00   0.68    0.50      1.01
race_Other                0.20       1.22      0.09            0.02            0.38                 1.02                 1.46    0.00   2.21    0.03      5.20
race_White                0.39       1.48      0.09            0.21            0.57                 1.23                 1.77    0.00   4.21  <0.005     15.29
education_K-8             0.13       1.14      0.08           -0.03            0.29                 0.97                 1.34    0.00   1.57    0.12      3.10
education_Some college    0.08       1.09      0.08           -0.07            0.24                 0.93                 1.27    0.00   1.07    0.29      1.80

Concordance 0.97
Partial AIC 6952.20
log-likelihood ratio test 4898.05 on 13 df
-log2(p) of ll-ratio test inf
Coefficient Matrix:
covariate
age                       0.714595
sbp                       0.821749
scr                       0.828384
bmi                       0.625474
hba1c                     0.708745
hypertension              0.808728
smoke                     0.750394
male                     -0.357358
race_Hispanic             0.063952
race_Other                0.200362
race_White                0.388712
education_K-8             0.129548
education_Some college    0.084941
Name: coef, dtype: float64
Variance-Covariance Matrix:
covariate                    age       sbp       scr       bmi     hba1c  \
covariate                                                                  
age                     0.000347  0.000388  0.000387  0.000297  0.000339   
sbp                     0.000388  0.000451  0.000457  0.000342  0.000389   
scr                     0.000387  0.000457  0.005284  0.000353  0.000253   
bmi                     0.000297  0.000342  0.000353  0.000304  0.000299   
hba1c                   0.000339  0.000389  0.000253  0.000299  0.001469   
hypertension            0.000372  0.000429  0.000601  0.000297  0.000388   
smoke                   0.000332  0.000377  0.000436  0.000300  0.000471   
male                   -0.000188 -0.000234 -0.000465 -0.000186 -0.000264   
race_Hispanic           0.000023  0.000065  0.000226  0.000053  0.000103   
race_Other              0.000065  0.000108  0.000345  0.000060  0.000141   
race_White              0.000167  0.000232  0.000405  0.000159  0.000263   
education_K-8           0.000091  0.000109  0.000325  0.000150  0.000109   
education_Some college  0.000067  0.000088  0.000527  0.000103  0.000028   

covariate               hypertension     smoke      male  race_Hispanic  \
covariate                                                                 
age                         0.000372  0.000332 -0.000188       0.000023   
sbp                         0.000429  0.000377 -0.000234       0.000065   
scr                         0.000601  0.000436 -0.000465       0.000226   
bmi                         0.000297  0.000300 -0.000186       0.000053   
hba1c                       0.000388  0.000471 -0.000264       0.000103   
hypertension                0.004769  0.000320 -0.000152      -0.000171   
smoke                       0.000320  0.004750 -0.000020       0.000435   
male                       -0.000152 -0.000020  0.004543       0.000387   
race_Hispanic              -0.000171  0.000435  0.000387       0.008820   
race_Other                  0.000031  0.000075  0.000352       0.003866   
race_White                  0.000346  0.000288  0.000451       0.003937   
education_K-8               0.000163  0.000311  0.000288       0.000015   
education_Some college      0.000314  0.000227  0.000162       0.000043   

covariate               race_Other  race_White  education_K-8  \
covariate                                                       
age                       0.000065    0.000167       0.000091   
sbp                       0.000108    0.000232       0.000109   
scr                       0.000345    0.000405       0.000325   
bmi                       0.000060    0.000159       0.000150   
hba1c                     0.000141    0.000263       0.000109   
hypertension              0.000031    0.000346       0.000163   
smoke                     0.000075    0.000288       0.000311   
male                      0.000352    0.000451       0.000288   
race_Hispanic             0.003866    0.003937       0.000015   
race_Other                0.008240    0.003952      -0.000592   
race_White                0.003952    0.008506      -0.000145   
education_K-8            -0.000592   -0.000145       0.006811   
education_Some college   -0.000198    0.000117       0.003330   

covariate               education_Some college  
covariate                                       
age                                   0.000067  
sbp                                   0.000088  
scr                                   0.000527  
bmi                                   0.000103  
hba1c                                 0.000028  
hypertension                          0.000314  
smoke                                 0.000227  
male                                  0.000162  
race_Hispanic                         0.000043  
race_Other                           -0.000198  
race_White                            0.000117  
education_K-8                         0.003330  
education_Some college                0.006358  

The cph.print_summary() method will print the summary table that includes HRs (under the exp(coef) column) and 95% CIs for each variable in the model.

The coefficient matrix is displayed as log hazard ratios (which can be converted to HRs by taking the exponential), and the variance-covariance matrix is also printed.

By running this script, you will obtain the required results using Python, without needing to switch to Stata.
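
For example, a small sketch (assuming the fitted cph object from the cell above) of pulling the HRs and their 95% CIs out directly:

import numpy as np

# Hazard ratios are exp(coef); lifelines also exposes them in the summary table
hazard_ratios = np.exp(cph.params_)
print(hazard_ratios)

# 95% CIs for the HRs, taken from the summary DataFrame
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])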

from lifelines import KaplanMeierFitter
import matplotlib.pyplot as plt

# Instantiate the Kaplan-Meier fitter
kmf = KaplanMeierFitter()

# Fit the Kaplan-Meier curve to the data
kmf.fit(df['time_to_kidney_failure'], event_observed=df['status'])

# Kaplan-Meier curve
plt.figure(figsize=(10, 5))

plt.subplot(1, 2, 1)
plt.step(kmf.survival_function_.index, kmf.survival_function_['KM_estimate'], where='post')
plt.title('Kaplan-Meier Survival Curve')
plt.ylabel('Probability of Survival')
plt.xlabel('Time')

# Failure function (1 - KM)
plt.subplot(1, 2, 2)
plt.step(kmf.survival_function_.index, 1 - kmf.survival_function_['KM_estimate'], where='post')
plt.title('Failure Function (1 - KM)')
plt.ylabel('Cumulative Probability of Failure')
plt.xlabel('Time')

plt.show()
[figure: Kaplan-Meier survival curve (left) and failure function, 1 - KM (right)]

It seems like the survival function is behaving unexpectedly, jumping straight from 1 to 0 at time 0. This may indicate an issue with the data itself. It might be helpful to look at the summary statistics of the time-to-event and status columns, and perhaps plot a histogram to visualize the distribution of the time-to-event variable.

It’s possible that all the events are coded as failures at time 0, or that there are other data-related issues.

Here’s how you can investigate:

  1. Check the Summary Statistics: Print out the basic statistics of the time-to-event column.

print(df['time_to_kidney_failure'].describe())
  2. Check for Unique Values: Check the unique values in the ‘status’ and ‘time_to_kidney_failure’ columns.

print(df['status'].unique())
print(df['time_to_kidney_failure'].unique())
  3. Plot a Histogram: This will help you visualize the distribution of the time-to-event variable.

plt.hist(df['time_to_kidney_failure'], bins=20, edgecolor='k')
plt.title('Histogram of Time to Kidney Failure')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.show()
  4. Check for Zero Time Events: Verify if there are any instances where the time to kidney failure is zero and the status is 1 (event occurred).

print(df[(df['time_to_kidney_failure'] == 0) & (df['status'] == 1)])

By inspecting the results of these commands, you may be able to identify what’s causing the unexpected behavior in the Kaplan-Meier plot. If everything appears to be in order with the data, we may need more information about the dataset and the context of the analysis to diagnose the problem further. Feel free to share any relevant details or ask additional questions!
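
Looking at the describe() output above, one likely culprit is already visible: the linear predictor is built from uncentered covariates (age around 60 and sbp around 120, each multiplied by coefficients of 0.6 to 0.9), so exp(-linear_pred) collapses to values on the order of 1e-51 and essentially every event happens at time zero, which is exactly the cliff seen in the Kaplan-Meier plot. A minimal sketch of one possible fix, with assumed per-SD coefficients and an assumed baseline scale, is to center and rescale before generating times:

import numpy as np

rng = np.random.default_rng(42)
n = 1000
age = rng.normal(60, 10, n)
sbp = rng.normal(120, 15, n)

# Centering keeps exp(-linear_pred) on a usable scale; the coefficients are
# illustrative log-HRs per SD rather than the 0.6-0.9 per-unit values above.
lin_pred = 0.7 * (age - age.mean()) / age.std() + 0.8 * (sbp - sbp.mean()) / sbp.std()

baseline_scale = 10.0   # assumed mean survival (years) for an average subject
time_to_kidney_failure = rng.exponential(baseline_scale * np.exp(-lin_pred))

print(np.percentile(time_to_kidney_failure, [25, 50, 75]))  # now on a scale of years, not 1e-51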

Hide code cell source
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Function to simulate data (same as before)
def simulate_data(n=1000):
    np.random.seed(42)
    age = np.random.normal(60, 10, n)
    sbp = np.random.normal(120, 15, n)
    scr = np.random.normal(1, 0.5, n)
    bmi = np.random.normal(25, 5, n)
    hba1c = np.random.normal(5, 1, n)
    hypertension = np.random.randint(2, size=n)
    smoke = np.random.randint(2, size=n)
    racecat = np.random.choice(['Black', 'Hispanic', 'Other', 'White'], n)
    educat = np.random.choice(['High School', 'K-8', 'Some college'], n)
    male = np.random.randint(2, size=n)

    # Coefficients to generate HRs greater than 2
    beta = np.array([0.7, 0.8, 0.9, 0.6, 0.7, 0.8, 0.7, -0.6, -0.5, -0.4, -0.3, -0.7, -0.6, -0.5, -0.4, 0.7])

    # Simulate the linear predictor
    linear_pred = (beta[0] * age + beta[1] * sbp + beta[2] * scr + beta[3] * bmi +
                   beta[4] * hba1c + beta[5] * hypertension + beta[6] * smoke +
                   beta[7] * (racecat == 'Black') + beta[8] * (racecat == 'Hispanic') +
                   beta[9] * (racecat == 'Other') + beta[10] * (racecat == 'White') +
                   beta[11] * (educat == 'High School') + beta[12] * (educat == 'K-8') +
                   beta[13] * (educat == 'Some college') + beta[14] * male)

    # Simulate time to kidney failure
    time_to_kidney_failure = np.random.exponential(scale=np.exp(-linear_pred))

    # Rescale time to desired range
    time_to_kidney_failure *= 300

    # Create status indicator (all failed)
    status = np.ones(n, dtype=int)

    # Create a DataFrame
    df = pd.DataFrame({
        'age': age,
        'sbp': sbp,
        'scr': scr,
        'bmi': bmi,
        'hba1c': hba1c,
        'hypertension': hypertension,
        'smoke': smoke,
        'race': racecat,
        'education': educat,
        'male': male,
        'time_to_kidney_failure': time_to_kidney_failure,
        'status': status
    })

    return df


# Simulate the data
df = simulate_data()

# Kaplan-Meier fitting
kmf = KaplanMeierFitter()
kmf.fit(df['time_to_kidney_failure'], event_observed=df['status'])

# Plot Kaplan-Meier survival function
plt.figure(figsize=[10,6])
kmf.plot()
plt.title('Kaplan-Meier Survival Curve')
plt.ylabel('Survival Probability')
plt.xlabel('Time')
plt.show()

# Plot 1 - KM failure function (1 - KM survival function)
plt.figure(figsize=[10,6])
plt.plot(kmf.survival_function_.index, 1 - kmf.survival_function_['KM_estimate'])
plt.title('Kaplan-Meier Failure Function')
plt.ylabel('Failure Probability')
plt.xlabel('Time')
plt.show()

# Histogram of Time to Kidney Failure
plt.figure(figsize=[10,6])
plt.hist(df['time_to_kidney_failure'], bins=20, edgecolor='k')
plt.title('Histogram of Time to Kidney Failure')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.show()

# Summary statistics
print(df.describe())

# Summary of Kaplan-Meier estimate
print(kmf.survival_function_.head())
[figures: Kaplan-Meier survival curve, Kaplan-Meier failure function (1 - KM), and histogram of time to kidney failure]
               age          sbp          scr          bmi        hba1c  \
count  1000.000000  1000.000000  1000.000000  1000.000000  1000.000000   
mean     60.193321   121.062544     1.002917    24.906404     4.950726   
std       9.792159    14.961816     0.491727     5.135663     0.992380   
min      27.587327    75.894170    -0.509756    10.352757     1.823296   
25%      53.524097   110.906375     0.676000    21.312898     4.317395   
50%      60.253006   120.946157     0.999875    25.000923     4.981758   
75%      66.479439   130.933233     1.330458    28.334727     5.639123   
max      98.527315   167.896614     2.963119    41.215465     8.112910   

       hypertension        smoke        male  time_to_kidney_failure  status  
count   1000.000000  1000.000000  1000.00000            1.000000e+03  1000.0  
mean       0.505000     0.493000     0.50000            4.159984e-51     1.0  
std        0.500225     0.500201     0.50025            1.133528e-49     0.0  
min        0.000000     0.000000     0.00000            1.353364e-87     1.0  
25%        0.000000     0.000000     0.00000            5.374310e-71     1.0  
50%        1.000000     0.000000     0.50000            1.100292e-66     1.0  
75%        1.000000     1.000000     1.00000            3.268844e-63     1.0  
max        1.000000     1.000000     1.00000            3.557128e-48     1.0  
              KM_estimate
timeline                 
0.000000e+00        1.000
1.353364e-87        0.999
4.460628e-85        0.998
1.549092e-84        0.997
4.574651e-84        0.996

Certainly! Given the data simulation code, we can add a few more diagnostic checks and visualizations to ensure that the Kaplan-Meier (KM) survival function is plausible. Let’s also create the Kaplan-Meier curve using Python so we can visualize the survival data directly.

Here’s the extended code with the diagnostics and visualizations:

(the extended code is the cell shown above)

  1. Kaplan-Meier Survival Curve: This plot shows the estimated survival probability over time.

  2. Kaplan-Meier Failure Function (1 - KM Survival Function): This plot represents the failure function, complementing the survival curve.

  3. Histogram of Time to Kidney Failure: This visualization will help you see the distribution of the time-to-event variable.

  4. Summary Statistics: These include means, standard deviations, and other descriptive statistics for all variables.

  5. Summary of Kaplan-Meier Estimate: A printout of the first few rows of the Kaplan-Meier estimate to examine the survival function values.

You can run this code to simulate the data, fit the Kaplan-Meier model, and visualize the resulting curves. If the Kaplan-Meier function is still not plausible, it may be useful to further investigate the underlying simulated data and the assumptions of the simulation model.

I’m sorry for the confusion. It seems like we need to diagnose the problem in the data simulation more carefully to fix the Kaplan-Meier plot. Let’s do the following steps:

  1. Inspect the Distribution of Times: We’ll print some descriptive statistics and plot a histogram to understand the distribution of the time_to_kidney_failure variable.

  2. Adjust the Time Scale: Based on the inspection, we’ll adjust the scaling factor accordingly to ensure the time scale makes sense.

  3. Plot the Kaplan-Meier Curve: We’ll plot the Kaplan-Meier curve using the adjusted time data.

Here’s the code:

import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

def simulate_data(n=1000):
    ...  # Same code as above ...

# Simulate the data
df = simulate_data()

# Scale the time variable
scaling_factor = 300
df['time_to_kidney_failure'] *= scaling_factor

# Print summary statistics and plot histogram
print(df['time_to_kidney_failure'].describe())
plt.hist(df['time_to_kidney_failure'], bins=30)
plt.title('Histogram of Time to Kidney Failure')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.show()

# Fit and plot the Kaplan-Meier curve
kmf = KaplanMeierFitter()
kmf.fit(df['time_to_kidney_failure'], event_observed=df['status'])
kmf.plot()
plt.title('Kaplan-Meier Survival Curve')
plt.xlabel('Time (years)')
plt.ylabel('Survival Probability')
plt.show()

This code first inspects the distribution of times, then fits and plots the Kaplan-Meier curve. If the plot is still not as expected, the descriptive statistics and histogram will help us diagnose the issue with the time scale further.

Please let me know how this works for you, and if there’s still an issue, we can continue to diagnose it together!

Hide code cell source
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Function to simulate data (same as before)
def simulate_data(n=1000):
    np.random.seed(42)
    age = np.random.normal(60, 10, n)
    sbp = np.random.normal(120, 15, n)
    scr = np.random.normal(1, 0.5, n)
    bmi = np.random.normal(25, 5, n)
    hba1c = np.random.normal(5, 1, n)
    hypertension = np.random.randint(2, size=n)
    smoke = np.random.randint(2, size=n)
    racecat = np.random.choice(['Black', 'Hispanic', 'Other', 'White'], n)
    educat = np.random.choice(['High School', 'K-8', 'Some college'], n)
    male = np.random.randint(2, size=n)

    # Coefficients to generate HRs greater than 2
    beta = np.array([0.7, 0.8, 0.9, 0.6, 0.7, 0.8, 0.7, -0.6, -0.5, -0.4, -0.3, -0.7, -0.6, -0.5, -0.4, 0.7])

    # Simulate the linear predictor
    linear_pred = (beta[0] * age + beta[1] * sbp + beta[2] * scr + beta[3] * bmi +
                   beta[4] * hba1c + beta[5] * hypertension + beta[6] * smoke +
                   beta[7] * (racecat == 'Black') + beta[8] * (racecat == 'Hispanic') +
                   beta[9] * (racecat == 'Other') + beta[10] * (racecat == 'White') +
                   beta[11] * (educat == 'High School') + beta[12] * (educat == 'K-8') +
                   beta[13] * (educat == 'Some college') + beta[14] * male)

    # Simulate time to kidney failure
    time_to_kidney_failure = np.random.exponential(scale=np.exp(-linear_pred))

    # Rescale time to desired range
    time_to_kidney_failure *= 300

    # Create status indicator (all failed)
    status = np.ones(n, dtype=int)

    # Create a DataFrame
    df = pd.DataFrame({
        'age': age,
        'sbp': sbp,
        'scr': scr,
        'bmi': bmi,
        'hba1c': hba1c,
        'hypertension': hypertension,
        'smoke': smoke,
        'race': racecat,
        'education': educat,
        'male': male,
        'time_to_kidney_failure': time_to_kidney_failure,
        'status': status
    })

    return df


# Simulate the data
df = simulate_data()

# Kaplan-Meier fitting
kmf = KaplanMeierFitter()
kmf.fit(df['time_to_kidney_failure'], event_observed=df['status'])

# Plot Kaplan-Meier survival function
plt.figure(figsize=[10,6])
kmf.plot()
plt.title('Kaplan-Meier Survival Curve')
plt.ylabel('Survival Probability')
plt.xlabel('Time')
plt.show()

# Plot 1 - KM failure function (1 - KM survival function)
plt.figure(figsize=[10,6])
plt.plot(kmf.survival_function_.index, 1 - kmf.survival_function_['KM_estimate'])
plt.title('Kaplan-Meier Failure Function')
plt.ylabel('Failure Probability')
plt.xlabel('Time')
plt.show()

# Histogram of Time to Kidney Failure
plt.figure(figsize=[10,6])
plt.hist(df['time_to_kidney_failure'], bins=20, edgecolor='k')
plt.title('Histogram of Time to Kidney Failure')
plt.xlabel('Time')
plt.ylabel('Frequency')
plt.show()

# Summary statistics
print(df.describe())

# Summary of Kaplan-Meier estimate
print(kmf.survival_function_.head())
[figures: Kaplan-Meier survival curve, Kaplan-Meier failure function (1 - KM), and histogram of time to kidney failure]
               age          sbp          scr          bmi        hba1c  \
count  1000.000000  1000.000000  1000.000000  1000.000000  1000.000000   
mean     60.193321   121.062544     1.002917    24.906404     4.950726   
std       9.792159    14.961816     0.491727     5.135663     0.992380   
min      27.587327    75.894170    -0.509756    10.352757     1.823296   
25%      53.524097   110.906375     0.676000    21.312898     4.317395   
50%      60.253006   120.946157     0.999875    25.000923     4.981758   
75%      66.479439   130.933233     1.330458    28.334727     5.639123   
max      98.527315   167.896614     2.963119    41.215465     8.112910   

       hypertension        smoke        male  time_to_kidney_failure  status  
count   1000.000000  1000.000000  1000.00000            1.000000e+03  1000.0  
mean       0.505000     0.493000     0.50000            4.159984e-51     1.0  
std        0.500225     0.500201     0.50025            1.133528e-49     0.0  
min        0.000000     0.000000     0.00000            1.353364e-87     1.0  
25%        0.000000     0.000000     0.00000            5.374310e-71     1.0  
50%        1.000000     0.000000     0.50000            1.100292e-66     1.0  
75%        1.000000     1.000000     1.00000            3.268844e-63     1.0  
max        1.000000     1.000000     1.00000            3.557128e-48     1.0  
              KM_estimate
timeline                 
0.000000e+00        1.000
1.353364e-87        0.999
4.460628e-85        0.998
1.549092e-84        0.997
4.574651e-84        0.996

922. causality#

Hey, I’m thinking in terms of clinical medicine and clinical epidemiology, Kaplan-Meier analysis, time-to-event data. In that approach to analysis, time is an outcome. It’s sort of an end. It’s a judgment. If something happens in a short time, the judgment is that, you know, you did something bad or something bad was associated with your life. And so it’s a judgment. It’s an outcome. It’s a “y”, the letter y. By contrast, in longitudinal data analysis or hierarchical modeling, time is an explanation. You’re trying to explain variance in the dimension of time. And so it is an explanation. What do you have to say?


You’re highlighting an interesting dichotomy in how time is treated in different analytical models, particularly in the context of clinical medicine and epidemiology.

In Kaplan-Meier analysis and other time-to-event models, time is indeed treated as an outcome or a dependent variable. It measures the time until a specific event, such as death or recurrence of a disease. This approach is often used to compare different treatments or risk factors, and shorter times to events might be seen as indicative of a less effective treatment or a more serious condition.

On the other hand, in longitudinal data analysis or hierarchical modeling, time is often treated as an independent variable or a covariate. Here, the focus is on understanding how the response variable changes over time, or how time itself might explain variations in the response. Time can be seen as an explanatory factor, helping to model how outcomes change and evolve.

Both approaches are valuable in different contexts, and the choice between them often depends on the specific questions a researcher is trying to answer. Understanding when to view time as an outcome versus an explanatory variable can be essential for selecting the right statistical method and for interpreting results accurately.
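
A toy side-by-side of the two roles of time, as a sketch only (the data are made up; lifelines is used as elsewhere in this notebook, and statsmodels is assumed to be available for the mixed model):

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Time as an outcome (the "y"): time-to-event data, summarized by Kaplan-Meier
surv = pd.DataFrame({
    "time": rng.exponential(5.0, 200),    # years until the event (made up)
    "event": rng.integers(0, 2, 200),     # 1 = event observed, 0 = censored
})
kmf = KaplanMeierFitter().fit(surv["time"], event_observed=surv["event"])
print(kmf.median_survival_time_)          # a judgment about how long until the event

# Time as an explanation (an "x"): repeated measures, time on the right-hand side
long = pd.DataFrame({
    "id": np.repeat(np.arange(50), 4),
    "time": np.tile([0, 1, 2, 3], 50),
})
long["y"] = 2 + 0.5 * long["time"] + rng.normal(0, 1, len(long))
mixed = smf.mixedlm("y ~ time", long, groups="id").fit()
print(mixed.params)                       # the slope on time explains change in y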


923. learning#

What is the difference between supervised and unsupervised learning? Aren’t Generative Adversarial Networks (GANs) the ultimate, perhaps ideal, form of unsupervised learning? I recently journaled about this in posting number 904 and wish to revisit the topic in light of my emerging understanding of Gospel music (I’m enrolled in the Gospel Music University at the moment). Vaughn Brathwaithe together with his team of coaches, including Lorenzo Bellini and Christian Lu, have already left their mark:

The end of all our exploring will be to arrive where we started and know the place for the first time. - T.S. Eliot

  1. Supervised, \(Y\): Trained on labeled data, the algorithm learns a function that maps inputs to desired outputs.

  2. Unsupervised, \(X\): Trained on unlabeled data, the algorithm tries to find hidden patterns and structures within the data.

  3. Quasisupervised, \(\beta\): Utilizes both labeled and unlabeled data to improve learning efficiency and performance.

  4. Reinforcement, \(\epsilon\): The algorithm learns to make decisions by interacting with an environment, receiving feedback as rewards or penalties.

  5. Transfer, \(z\): This involves taking knowledge gained from one task and applying it to a related, but different task, often improving learning efficiency in the new task.

  6. Generative adversarial networks, \(\rho\): A part of unsupervised learning, where two networks (generator and discriminator) are trained together competitively. The generator creates data, while the discriminator evaluates it. They are trained together, often leading to the generation of very realistic data.
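
Since item 6 is the one the rest of this posting leans on, here is a toy sketch of the generator-versus-discriminator game on one-dimensional data (PyTorch is assumed to be available; the “real” distribution and all sizes are made up for illustration):

import torch
import torch.nn as nn

# Toy GAN on 1-D data: the generator learns to mimic samples from N(2, 0.5)
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0    # "real" data the generator never sees directly
    fake = G(torch.randn(64, 1))             # generator turns noise into candidate data

    # Discriminator's move: call real data 1, generated data 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator's move: fool the discriminator into calling fakes real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # should drift toward 2.0 as training proceeds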

It appears that artists (musicians, illusionists, soccer maestros, etc.) engage in a game with an audience, opponent, or adversary. So they are actively engaged together, over extensive periods of time, leading to the generation of very realistic and plausible chord progressions, melodies, and lyrics for musicians; tricks and illusions for illusionists; and moves, passes, dummies, and side-steps for soccer maestros.

So when I found myself listening to this song from my playlist:

Tell me how U want it – Johnny Gill (1993)

VI - III Dominant 7th
VI - III Dominant 7th
II - III Dominant 7th
?? - ??

I was able to predict the next chord progression, which was a II - V - I in the key of C major. I was able to do this because I had been listening to the song for a long time. I had been listening to the song for a long time because I liked it. I lik… – GitHub Copilot

Anyways, back to those question marks. Pardon Co-pilot’s enthusiasm. I’d earlier dictated the following to ChatGPT4 on my phone:

III Dominant 7th: Discuss.


The “III Dominant 7” chord refers to a specific chord built on the third degree of the diatonic scale but altered to have a dominant seventh quality.

In traditional tonal harmony, the III chord in a major key would naturally be minor. However, a III Dominant 7 chord introduces a major third and a minor seventh above the root, creating a dominant seventh chord quality.

This can produce a very specific and unusual color in the harmonic progression. Since dominant chords often create tension that seeks resolution, using a III Dominant 7 chord can lead to an unexpected direction in the music. It may be used to modulate to a different key or to create a unique harmonic effect within the existing key.

In jazz and some contemporary music, this kind of alteration to traditional harmonic progressions is more common, allowing for complex and engaging harmonic landscapes. It can offer a surprising twist in the harmonic flow of a piece, creating engaging and stimulating music for listeners.


Where does it resolve to?


The III Dominant 7 chord can resolve to a variety of different chords, depending on the context and the composer’s intentions. It may resolve to the I chord, creating a strong sense of resolution and closure. It may also resolve to the IV chord, creating a more ambiguous and unresolved feeling. It may also resolve to the V chord, creating a sense of tension and anticipation. - Co-pilot (interjecting themselves once again)


The III Dominant 7 chord is a bit unconventional in traditional tonal harmony, so its resolution can vary depending on the context and the composer’s intention.

  1. To the VI Chord: One common resolution for the III Dominant 7 chord is to the VI chord. In this context, the III Dominant 7 chord can function similarly to a secondary dominant, leading to the VI as if it were a temporary tonic (see the sketch after this answer).

  2. To the IV Chord: In some instances, the III Dominant 7 might resolve to the IV chord. This could create a deceptive or unexpected resolution, as the ear might expect the III Dominant 7 to resolve to VI.

  3. Modulation: The III Dominant 7 chord could also be used to modulate to a different key, leading the music in a new and unexpected direction.

The actual resolution can depend heavily on the specific piece of music, the genre, and the composer’s creative intentions. Experimenting with different resolutions for the III Dominant 7 chord can lead to various intriguing and distinctive musical effects.
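
Option 1 is easy to see with numbers. In A-flat major the VI chord is F minor, and C7 resolving to F minor is just the V7-to-i move of F minor's own key, so the voice-leading is tiny. A toy sketch of my own, same caveats as before:

```python
# Voice-leading from C7 (III7 in Ab major) to F minor (VI).
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

c_dom7  = ["C", "E", "G", "Bb"]    # III7 chord tones
targets = ["C", "F", "Ab", "Ab"]   # nearest F-minor (VI) tones: F, Ab, C (Ab doubled)

for src, dst in zip(c_dom7, targets):
    move = (NOTES.index(dst) - NOTES.index(src)) % 12
    if move > 6:
        move -= 12                 # shortest signed interval
    print(f"{src:>2} -> {dst:<2}  {move:+d} semitone(s)")
```

Every chord tone moves by two semitones or less, with the major third E pulling up a half step into the new root F, which is why the III Dominant 7 behaves so convincingly as a secondary dominant of VI.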


Based on the above, I’d say that the III Dominant 7 chord resolves to the VI chord. - Co-pilot (interjecting themselves once again)


Fascinating. So we have two AIs, probably both trained on Microsoft's supercomputer, Azure, and they are both able to predict the next chord progression… Not.

But essentially they are right. As you can tell from the opening sequence of the song, the III Dominant 7 chord resolves to the VI chord. The restless quality of the song is maintained by that III Dominant 7 chord, which is not diatonic to the key (A-flat Major in this case).

But my question marks were not about the III Dominant 7 chord. And they were not about the VI chord, which by now was getting annoying (remember, it's the composer – Kenny "Babyface" Edmonds – manipulating the listener at this point).

This question:

VI - III Dominant 7th
VI - III Dominant 7th
II - III Dominant 7th
?? - ??

was about the last two chords. And here’s the answer:

II - III Dominant 7th
V Maj 7 (13th) - V Maj 7 (13th)


Atta boy, Babyface! From the secondary dominant chord (III Dominant 7) to the dominant chord (V Maj 7 [13th]). Hallelujah, we feel the inevitable and ultimate resolution to the tonic chord – perhaps a triad – in the key of A-flat Major.


But of course that almost never happens in R&B. So we get this:

II - III Dominant 7th
V Maj 13th - V Maj 7 (13th)

Verse:

II - V Maj 7 (13)
II - V Maj 7 (13)
III min 7 - VI (the usual diatonic chords in the key of A-flat Major)
V Maj 7 (13) - V Maj 7 (13)


To summarize, the adversarial cultural networks of the composer and the listener are engaged in a game of cat and mouse. The composer is trying to manipulate the listener into feeling a certain way, and the listener is trying to predict the next chord progression. - Co-pilot (interjecting themselves once again – but with such a profound statement this time round, anticipating my next question)


Ain't that some shit!

924. austin#

  • before you walk out of my life

  • performed by 15yo monica at the time

  • written & produced by precocious dallas austin

  • he was 24 & we see him channel the basic diatonic chords of Gb Maj in circle-of-fifths

  • but with chromatic reharmonizations just when you are about to yawn

Gb Maj:
II min 7 - V Maj 7
III min 7 - VI min 7
II min 7 - V Maj 7
III min 7 - VI min 7
II min 7 - V Maj 7
III min 7 - VI min 7
bV min 7 b5 - IV min bb7 b5
III min 7 - bIII min bb7 b5
II min 7 - V Maj7

  • the highlights are chromatic reharms of the diatonic circle of fifths

  • a well beloved technique of mozart as seen in movement i of symphony no 40

  • dallas austin was unconsciously channeling modes of expression from ancestors past


Chromatic reharmonization refers to the process of altering or substituting chords in a musical piece to create more chromatic (colorful) harmonic movement. This technique can be used to add complexity, depth, and expressiveness to a composition. Here’s an overview of the concept and how it might be applied:

Basics#

  • Chromatic Notes: These are notes that are not part of the diatonic scale, which is the set of seven notes (five whole steps and two half steps) that make up major and minor scales. Chromatic notes are those that fall outside of this scale.

  • Reharmonization: This refers to the process of changing the chords or harmonic structure of a piece of music. Reharmonization can be done for a variety of reasons, such as creating a fresh interpretation, enhancing emotional impact, or accommodating a new melody.

Chromatic Reharmonization Techniques#

  1. Substitute Dominant Chords: You can replace a dominant chord with another dominant chord a tritone away. For example, instead of G7 (the dominant chord in the key of C major), you might use Db7. This substitution creates an unexpected sound that can add interest to a piece (see the sketch after this list).

  2. Diminished Passing Chords: Diminished chords can be used as passing chords between diatonic chords to create smooth voice-leading and to add chromatic movement to a progression.

  3. Secondary Dominants: These are dominant chords that resolve to something other than the tonic chord. For example, in the key of C major, you could use A7 to resolve to Dm (the ii chord). This adds a chromatic note and creates a temporary feeling of modulation.

  4. Modulation to Distant Keys: This is a more dramatic approach where the composer changes the key center to a chromatically related key, such as a minor third or tritone away from the original key. This can create a surprising and rich effect.

  5. Chord Extensions and Alterations: Adding 9ths, 11ths, 13ths, or altered tones (like a sharp 5th or flat 9th) to chords can add chromaticism to the harmony.

  6. Chromatic Mediants: These are chords that are a third apart but do not share the usual diatonic common tones. For example, in the key of C major, using an Ab major chord would be a chromatic mediant relationship. These can be used to create a dreamy or mysterious effect.
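
For technique 1, the trick is that the substitute dominant a tritone away shares the same tritone as the original (its 3rd and 7th swap roles), so it resolves to the same tonic. A small sketch, my own illustration, using pitch-class spellings rather than proper enharmonics:

```python
# Tritone substitution: G7 and the dominant 7th a tritone away share a tritone.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
DOM7 = [0, 4, 7, 10]   # root, 3rd, 5th, b7 in semitones

def dom7(root):
    r = NOTES.index(root)
    return [NOTES[(r + s) % 12] for s in DOM7]

g7  = dom7("G")                                    # G  B  D  F
db7 = dom7(NOTES[(NOTES.index("G") + 6) % 12])     # root a tritone away -> Db7
print("G7 :", g7)
print("Db7:", db7)
print("shared tritone:", set(g7) & set(db7))       # {'B', 'F'}
```

The shared pair is B and F (Db7's seventh is properly spelled Cb, but it is the same pitch), which is why Db7 can stand in for G7 on the way to C.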

Conclusion#

Chromatic reharmonization can be a powerful tool for composers and arrangers. It allows for the exploration of new harmonic landscapes and can infuse a piece with fresh emotional depth and complexity. However, it also requires careful consideration of voice-leading and tonal coherence, as excessive chromaticism can lead to a loss of clarity or direction in the music. Like any tool in composition, it should be used thoughtfully and with a clear intention.


Fact-checking on wikipedia:

It was written by Andrea Martin, Carsten Schack, and Kenneth Karlin, with production helmed by Schack and Karlin under their production moniker Soulshock & Karlin (Danish Wazungu boys, to be sure!). Initially recorded for fellow R&B singer Toni Braxton's Secrets (1996) album, it was left unused and later re-recorded by Monica for her debut album Miss Thang (1995).

08/11/2023#

924. ayf#

amatu alu
amatu alu
yesu ma alya

family tv at 6:24/51:42

  • syncopation

  • 3/4

  • highlighted words are the only ones that are sung without syncopation

  • this is very typical of worship songs from the west nile region of uganda

the syncopation is very similar to the syncopation in the song mama by sauti sol - co-pilot interjection

  • usually accompanied by an adungu orchestra

  • drums are not a thing in the west nile region

  • we can see why from this song:

the syncopation is so strong that it would be difficult to play drums along with it - co-pilot interjection :)

  • not exactly. the syncopation is so strong, we …

don’t need drums to keep the beat. the syncopation is the beat. - pilot interjection
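
A toy way to picture "the syncopation is the beat": lay a 3/4 bar out in eighth notes and mark where the sung onsets land. The onset pattern below is hypothetical, for illustration only, not a transcription of the song.

```python
# Illustrative 3/4 bar in eighth notes with a heavily syncopated onset pattern.
EIGHTHS_PER_BAR = 6          # 3/4 time, counted in eighth notes
onsets = [1, 3, 4]           # hypothetical sung onsets (0-indexed eighths)

for i in range(EIGHTHS_PER_BAR):
    beat = "BEAT" if i % 2 == 0 else " & "   # downbeats fall on even eighths
    hit = "x" if i in onsets else "."
    print(f"eighth {i}: {beat}  {hit}")

off_beat = sum(1 for i in onsets if i % 2 == 1)
print(f"{off_beat} of {len(onsets)} onsets fall off the beat")
```

When most onsets sit on the off-eighths like this, the pulse is implied by the singing itself, so a drum marking the downbeats adds nothing.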