One comment I often get from a Grammarian is to replace the word ‘very’ with stronger alternatives, and there is no shortage of good synonyms. It is just as important, however, to avoid “badjectives”—adjectives so generic and broad that they carry virtually no impact (as Joel Schwartzberg calls them: https://www.inc.com/joel-schwartzberg/improve-communication-by-avoiding-badjectives.html). They are used so widely that they have become trivialized and lost all meaning—every idea is “great”, impact is “amazing”, products are “innovative”, you name it. We use them for a simple reason: they are readily available, instant, and easy to use.
Going from “badjectives” to more impactful adjectives is simple—just ask and answer the question WHY?, then choose the most meaningful answer. An example:
❌ Great job, Lisa!
❔ WHY was Lisa’s accomplishment “great?” Because it could lead to a new revenue stream.
“It changes everything!” say remote work enthusiasts. Shiny new technologies make online work possible and offer opportunities for asynchronous interactions. However, it appears that the problem is not technological but social. We’re still stuck in old patterns of work and habits of presenteeism. What is worse, technology is aggravating things.
A recent study, “Killing Time at Work” by Qatalog x GitLab, offers some interesting figures. The good news—people do value flexibility and asynchronous work and believe this way of working benefits both output and wellbeing. Some 80% of people believe they are more productive and create higher-quality output when they have more flexibility over when they work. People are ready to resign if flexibility is limited (66%) and ready to accept lower-paid roles with greater flexibility (43%). This shift is already in progress—two-thirds of respondents said they have more flexibility compared to before the pandemic.
The bad news: people still spend a striking 67 minutes—13% of the workday—in a “productivity theater,” showing their colleagues and managers that they are present and ‘working’. Usually this means signaling online presence at certain times of the day, but quite often (73%) it means replying to notifications outside of working hours. The proliferation of notifications is striking—the average knowledge worker now receives notifications from six applications. What is worse, by default notifications are noisy, competing for your attention by blinking, blipping, and buzzing. The cost of interruptions is well known—it takes an average of 23 minutes and 15 seconds to get back to the task. In many cases managers and leaders send mixed messages, officially encouraging flexible arrangements while signaling the virtue of presenteeism and constant connectedness (in the worst cases, peppering calendars with mandatory meetings). The result is a poisonous work-life blur, a lack of control over your time, burnout, and ultimately an exodus of frustrated employees.
The way forward toward asynchronous work requires intentional culture shifts that combine technological and cultural responses, the Qatalog x GitLab study suggests. For instance, a team could minimize distractions by agreeing to use fewer applications and setting clear expectations—e.g. a response to email is expected within 24 hours, and one (and only ONE) instant messaging app is used for urgent (and really URGENT) conversations. Meetings and synchronous communication should be used more deliberately—less frequently, but with a specific purpose.
📜 Being too scripted. An overly scripted speech sounds robotic and rarely captures the audience’s attention. Better to prepare a presentation and leave room for improvisation
🤐 Using too many filler words. Filler words—and, but, so, you know, ah, um—dilute your message and distract the audience. The good news—you can improve your signal-to-noise ratio with practice. At Toastmasters we have a special role, the ‘Ah-Counter’, to help people improve
⁉ Using question inflections. Questions can be a great tool to engage the audience. However, adding a question inflection to a statement makes you sound unsure of yourself. So, choose your questions wisely
💃 Swaying or standing too still. Body language is crucial for reinforcing your message; however, try to avoid two extremes—standing too still or swaying all the time.
🤩 Avoiding eye contact. Maintaining eye contact is vital for engaging the audience. Online and hybrid meetings have made it harder to keep. Two possible solutions—scan the faces on the screen, and look at the camera when you are speaking. Try and practice.
📊 Misusing visual aids. Visual aids are great, and online meetings offer new ways to use them. But please, refrain from reading them verbatim throughout the talk.
⌚ Mismanaging time. “I will be short” is THE worst opening phrase, as it guarantees overtime. Be on time, respect your audience.
Recent developments in AI have produced impressive tools, such as models for image generation. For instance, DALL-E 2 grabbed many headlines, as it can create realistic images and art from a description in natural language. While the generated images are impressive, a basic question remains unanswered—how does the model grasp relations between objects and agents? Relations are fundamental to human reasoning and cognition. Hence, machine models that aim at human-level perception and reasoning should be able to recognize relations and adequately reflect them in their generative output.
The recent paper “Testing Relational Understanding in Text-Guided Image Generation” puts this assumption to the test. The researchers generated galleries of DALL-E 2 images using sentences with basic relationships—e.g. “a child touching a bowl” or “a cup on a spoon”. They then showed the images and prompt sentences to 169 participants and asked them to select the images that matched each prompt. Across the 75 distinct prompts, only some 20% of images were perceived to be relevant to their associated prompts. Agentic prompts (somebody is doing something) generated slightly higher agreement, 28%. Physical prompts (X positioned in relation to Y) showed even lower agreement, 16%. The chart shows the proportion of participants reporting agreement between image and prompt, by the specific relation being tested. Only 3 relations produced agreement significantly above 25% (“touching”, “helping”, and “kicking”), and no relation produced agreement above 50%.
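The agreement figures above are simple proportions: the share of participants who judged an image to match its prompt. A minimal sketch of that calculation, using made-up yes/no judgments (the data below is hypothetical, not from the study):

```python
# Hypothetical participant judgments: did each person think the generated
# image matched its prompt? (Illustrative data, not the study's responses.)
responses = {
    "a child touching a bowl": [True, True, True, True, False],
    "a monkey touching an iguana": [False, False, True, False, False],
}

def agreement(judgments):
    """Proportion of participants reporting an image-prompt match."""
    return sum(judgments) / len(judgments)

for prompt, judgments in responses.items():
    print(f"{prompt}: {agreement(judgments):.0%}")
```

In the study each relation's score aggregates many images and participants in the same way, which is why a chance-level baseline and confidence intervals matter when judging whether a relation is "understood."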
The results suggest that the model does not yet have a grasp of even basic relations involving simple objects and agents. Second, the model has particular difficulty with imagination, i.e. the ability to combine elements not previously combined in the training data. For instance, the prompt “a child touching a bowl” generated images with high agreement (87%), while “a monkey touching an iguana” showed far worse results (11%). “A spoon in a cup” is easily generated, but not “a cup on a spoon”, reflecting the effect of training data on model success.