9.7: Why It Matters: Cell Communication


Why identify the main components of a signal transduction pathway?

Imagine what life would be like if you and the people around you could not communicate. Social organization is dependent on communication between the individuals that comprise that society; without communication, society would fall apart.

As with people, it is vital for individual cells to be able to interact with their environment. This is true whether a cell is growing by itself in a pond or is one of many cells that form a larger organism. In order to properly respond to external stimuli, cells have developed complex mechanisms of communication that can receive a message, transfer the information across the plasma membrane, and then produce changes within the cell in response to the message.

In multicellular organisms, cells send and receive chemical messages constantly to coordinate the actions of distant organs, tissues, and cells. The ability to send messages quickly and efficiently enables cells to coordinate and fine-tune their functions.

While the necessity for cellular communication in larger organisms seems obvious, even single-celled organisms communicate with each other. Yeast cells signal each other to aid mating. Some forms of bacteria coordinate their actions in order to form large complexes called biofilms or to organize the production of toxins to remove competing organisms. The ability of cells to communicate through chemical signals originated in single cells and was essential for the evolution of multicellular organisms. The efficient and error-free function of communication systems is vital for all life as we know it.

Learning Outcomes

  • Differentiate between different types of signals
  • Describe how a cell propagates a signal
  • Describe how a cell responds to a signal
  • Discuss the process of signaling in single-celled organisms


What is the Smart Vaccine?

Nanobot engineering is defined as the engineering of “smart” functional systems at the molecular scale. One nanotechnology dictionary defines nanotechnology as “the design, characterization, production and application of structures, devices and systems by controlled manipulation of size and shape at the nanometer scale (atomic, molecular and macromolecular scale) that produces structures, devices and systems with at least one novel/superior characteristic or property.”

The SV is an intelligent nanobot, programmable to perform a specific function in digital immunity. Fig. 3.1 shows an exploded view of the SV. Because it is scaled at the molecular level, it can be a component of a chip, roam through computer cables, reside on a storage device, and communicate with other SVs. That is why it is called the “Smart” Vaccine. It is the counterpart of the B cells and T cells of the human immune system. We can also compare it to the neurons that carry sensory and motor signals to and from the brain. The scaling down of CEWPS/SV to the nanomolecular level is the advantage that places the system ahead of its time. In this and the following chapter, we describe the anatomy of the SV nanobot in more detail.

Imagine you are asked to compose an ultra-short history of philosophy. Perhaps you’ve been challenged to squeeze the impossibly sprawling diversity of philosophy itself into just a few tweets. You could do worse than to search for the single word that best captures the ideas of every important philosopher. Plato had his ‘forms’. René Descartes had his ‘mind’ and John Locke his ‘ideas’. John Stuart Mill later had his ‘liberty’. In more recent philosophy, Jacques Derrida’s word was ‘text’, John Rawls’s was ‘justice’, and Judith Butler’s remains ‘gender’. Michel Foucault’s word, according to this innocent little parlour game, would certainly be ‘power’.

Foucault remains one of the most cited 20th-century thinkers and is, according to some lists, the single most cited figure across the humanities and social sciences. His two most referenced works, Discipline and Punish: The Birth of the Prison (1975) and The History of Sexuality, Volume One (1976), are the central sources for his analyses of power. Interestingly enough, however, Foucault was not always known for his signature word. He first gained his massive influence in 1966 with the publication of The Order of Things. The original French title gives a better sense of the intellectual milieu in which it was written: Les mots et les choses, or ‘Words and Things’. Philosophy in the 1960s was all about words, especially among Foucault’s contemporaries.

In other parts of Paris, Derrida was busily asserting that ‘there is nothing outside the text’, and Jacques Lacan turned psychoanalysis into linguistics by claiming that ‘the unconscious is structured like a language’. This was not just a French fashion. In 1967 Richard Rorty, surely the most infamous American philosopher of his generation, summed up the new spirit in the title of his anthology of essays, The Linguistic Turn. That same year, Jürgen Habermas, soon to become Germany’s leading philosopher, published his attempt at ‘grounding the social sciences in a theory of language’.

Foucault’s contemporaries pursued their obsessions with language for at least another few decades. Habermas’s magnum opus, titled The Theory of Communicative Action (1981), remained devoted to exploring the linguistic conditions of rationality. Anglo-American philosophy followed the same line, and so too did most French philosophers (except they tended toward the linguistic nature of irrationality instead).

For his part, however, Foucault moved on, somewhat singularly among his generation. Rather than staying in the world of words, in the 1970s he shifted his philosophical attention to power, an idea that promises to help explain how words, or anything else for that matter, come to give things the order that they have. But Foucault’s lasting importance is not in his having found some new master-concept that can explain all the others. Power, in Foucault, is not another philosophical godhead. For Foucault’s most crucial claim about power is that we must refuse to treat it as philosophers have always treated their central concepts, namely as a unitary and homogenous thing that is so at home with itself that it can explain everything else.

Foucault did not attempt to construct a philosophical fortress around his signature concept. He had witnessed first-hand how the arguments of the linguistic-turn philosophers grew brittle once they were deployed to analyse more and more by way of words. So Foucault himself expressly refused to develop an overarching theory of power. Interviewers would sometimes press him to give them a unified theory, but he always demurred. Such a theory, he said, was simply not the goal of his work. Foucault remains best known for his analyses of power; indeed, his name is, for most intellectuals, almost synonymous with the word ‘power’. Yet he did not himself offer a philosophy of power. How could this be possible?

Herein lies the richness and the challenge of Foucault’s work. His is a philosophical approach to power characterised by innovative, painstaking, sometimes frustrating, and often dazzling attempts to politicise power itself. Rather than using philosophy to freeze power into a timeless essence, and then to use that essence to comprehend so much of power’s manifestations in the world, Foucault sought to unburden philosophy of its icy gaze of capturing essences. He wanted to free philosophy to track the movements of power, the heat and the fury of it working to define the order of things.

To appreciate the originality of Foucault’s approach, it is helpful to contrast it to that of previous political philosophy. Before Foucault, political philosophers had presumed that power had an essence: be it sovereignty, or mastery, or unified control. The German social theorist Max Weber (1864-1920) influentially argued that state power consisted in a ‘monopoly of the legitimate use of physical force’. Thomas Hobbes (1588-1679), the English philosopher and original theorist of state power, saw the essence of power as state sovereignty. Hobbes thought that at its best and purest power would be exercised from the singular position of sovereignty. He called it ‘The Leviathan’.

Foucault never denied the reality of state power in the Hobbesian sense. But his political philosophy emanates from his skepticism about the assumption (and it was a mere assumption until Foucault called it into question) that the only real power is sovereign power. Foucault accepted that there were real forces of violence in the world, and not only state violence. There is also corporate violence due to enormous condensations of capital, gender violence in the form of patriarchy, and the violences both overt and subtle of white supremacy in such forms as chattel slavery, real-estate redlining, and now mass incarceration. Foucault’s work affirmed that such exercises of force were exhibits of sovereign power, likenesses of Leviathan. What he doubted was the assumption that we could extrapolate from this easy observation the more complex thought that power only ever appears in Leviathan-like form.

Power is all the more cunning because its basic forms can change in response to our efforts to free ourselves from its grip

In seeing through the imaginary singularity of power, Foucault was able to also envision it set against itself. He was able to hypothesise, and therefore to study, the possibility that power does not always assume just one form and that, in virtue of this, a given form of power can coexist alongside, or even come into conflict with, other forms of power. Such coexistences and conflicts, of course, are not mere speculative conundrums, but are the sort of stuff that one would need to empirically analyse in order to understand.

Foucault’s skeptical supposition thus allowed him to conduct careful enquiries into the actual functions of power. What these studies reveal is that power, which easily frightens us, turns out to be all the more cunning because its basic forms of operation can change in response to our ongoing efforts to free ourselves from its grip. To take just one example, Foucault wrote about the way in which a classically sovereign space such as the judicial court came to accept into its proceedings the testimony of medical and psychiatric experts whose authority and power were exercised without recourse to sovereign violence. An expert diagnosis of ‘insanity’ today or ‘perversity’ 100 years ago could come to mitigate or augment a judicial decision.

Foucault showed how the sovereign power of Leviathan (think crowns, congresses and capital) has over the past 200 years come to confront two new forms of power: disciplinary power (which he also called anatomo-politics because of its detailed attention to training the human body) and bio-politics. Biopower was Foucault’s subject in The History of Sexuality, Volume One. Meanwhile the power of discipline, the anatomo-politics of the body, was Foucault’s focus in Discipline and Punish.

More than any other book, it is Discipline and Punish in which Foucault constructs his signature, meticulous style of enquiry into the actual mechanisms of power. The recent publication of a now nearly complete set of Foucault’s course lectures at the Collège de France in Paris (probably the most prestigious academic institution in the world, and where Foucault lectured from 1970 to 1984) reveals that Discipline and Punish was the result of at least five years of intensive archival research. While Foucault worked on this book, he was deeply engaged in its material, leading research seminars and giving huge public lectures that are now being published under such titles as The Punitive Society and Psychiatric Power. The material he addressed ranges broadly, from the birth of modern criminology to psychiatry’s gendered construction of hysteria. The lectures show Foucault’s thought in development, and thus offer insight into his philosophy in the midst of its transformation. When he eventually organised his archival materials into a book, the result was the consolidated and efficient argumentation of Discipline and Punish.

Discipline, according to Foucault’s historical and philosophical analyses, is a form of power that tells people how to act by coaxing them to adjust themselves to what is ‘normal’. It is power in the form of correct training. Discipline does not strike down the subject at whom it is directed, in the way that sovereignty does. Discipline works more subtly, with an exquisite care even, in order to produce obedient people. Foucault famously called the obedient and normal products of discipline ‘docile subjects’.

The exemplary manifestation of disciplinary power is the prison. For Foucault, the important thing about this institution, the most ubiquitous site of punishment in the modern world (but practically non-existent as a form of punishment before the 18th century), is not the way in which it locks up the criminal by force. This is the sovereign element that persists in modern prisons, and is fundamentally no different from the most archaic forms of sovereign power that exert violent force over the criminal, the exile, the slave and the captive. Foucault looked beyond this most obvious element in order to see more deeply into the elaborate institution of the prison. Why had the relatively inexpensive techniques of torture and death gradually given way over the course of modernity to the costly complex of the prison? Was it just, as we are wont to believe, because we all started to become more humanitarian in the 18th century? Foucault thought that such an explanation would be sure to miss the fundamental way in which power changes when spectacles of torture give way to labyrinthine prisons.

The purpose of constant surveillance is to compel prisoners to regard themselves as subject to correction

Foucault argued that if you look at the way in which prisons operate, that is, at their mechanics, it becomes evident that they are designed not so much to lock away criminals as to submit them to training rendering them docile. Prisons are first and foremost not houses of confinement but departments of correction. The crucial part of this institution is not the cage of the prison cell, but the routine of the timetables that govern the daily lives of prisoners. What disciplines prisoners is the supervised morning inspections, the monitored mealtimes, the work shifts, even the ‘free time’ overseen by a panoply of attendants including armed guards and clipboard-wielding psychologists.

Importantly, all of the elements of prison surveillance are continuously made visible. That is why his book’s French title Surveiller et punir, more literally ‘Surveil and Punish’, is important. Prisoners must be made to know that they are subject to continual oversight. The purpose of constant surveillance is not to scare prisoners who are thinking of escaping, but rather to compel them to regard themselves as subject to correction. From the moment of morning rise to night’s lights out, the prisoners are subject to ceaseless behavioural inspection.

The crucial move of imprisonment is that of coaxing prisoners to learn how to inspect, manage and correct themselves. If effectively designed, supervision renders prisoners no longer in need of their supervisors. For they will have become their own attendant. This is docility.

To illustrate this distinctly modern form of power, Foucault used an image in Discipline and Punish that has become justly famous. From the archives of history, Foucault retrieved an almost-forgotten scheme of the canonical English moral philosopher Jeremy Bentham (1748-1832). Bentham proposed a maximal-surveillance prison he christened ‘The Panopticon’. Central to his proposal was an architecture designed for correction. In the Panopticon, the imposing materiality of the heavy stones and metal bars of physical imprisonment is less important than the weightless elements of light and air through which a prisoner’s every action would be traversed by supervision.

The design of the Panopticon was simple. A circle of cells radiate outward from a central guard tower. Each cell is positioned facing the tower and lit by a large window from the rear so that anyone inside the tower could see right through the cell in order to easily apprehend the activities of the prisoner therein. The guard tower is eminently visible to the prisoners but, because of carefully constructed blind windows, the prisoners cannot see back into the tower to know if they are being watched. This is a design of ceaseless surveillance. It is an architecture not so much of a house of detention as, in Bentham’s words, ‘a mill for grinding rogues honest’.

The Panopticon might seem to have remained a dream. No prison was ever built according to Bentham’s exact specifications, though a few came close. One approximation, the Stateville ‘F’ House in Illinois, was opened in 1922 and was finally closed down in late November 2016. But the important thing about the Panopticon was that it was a general dream. One need not be locked away in a prison cell to be subject to its designs of disciplinary dressage. The most chilling line in Discipline and Punish is the final sentence of the section entitled ‘Panopticism’, where Foucault wryly asks: ‘Is it surprising that prisons resemble factories, schools, barracks, hospitals, which all resemble prisons?’ If Foucault is right, we are subject to the power of correct training whenever we are tied to our school desks, our positions on the assembly line or, perhaps most of all in our time, our meticulously curated cubicles and open-plan offices so popular as working spaces today.

It was a bio-power wielded by psychiatrists and doctors that turned homosexuality into a ‘perversion’

To be sure, disciplinary training is not sovereign violence. But it is power. Classically, power took the form of force or coercion and was considered to be at its purest in acts of physical violence. Discipline acts otherwise. It gets a hold of us differently. It does not seize our bodies to destroy them, as Leviathan always threatened to do. Discipline rather trains them, drills them and (to use Foucault’s favoured word) ‘normalises’ them. All of this amounts to, Foucault saw, a distinctly subtle and relentless form of power. To refuse to recognise such disciplining as a form of power is a denial of how human life has come to be shaped and lived. If the only form of power we are willing to recognise is sovereign violence, we are in a poor position to understand the stakes of power today. If we are unable to see power in its other forms, we become impotent to resist all the other ways in which power brings itself to bear in forming us.

Foucault’s work shows that disciplinary power was just one of many forms that power has come to take over the past few hundred years. Disciplinary anatomo-politics persists alongside sovereign power as well as the power of bio-politics. In his next book, The History of Sexuality, Foucault argued that bio-politics helps us to understand how garish sexual exuberance persists in a culture that regularly tells itself that its true sexuality is being repressed. Bio-power does not forbid sexuality, but rather regulates it in the maximal interests of very particular conceptions of reproduction, family and health. It was a bio-power wielded by psychiatrists and doctors that, in the 19th century, turned homosexuality into a ‘perversion’ because of its failure to focus sexual activity around the healthy reproductive family. It would have been unlikely, if not impossible, to achieve this by sovereign acts of direct physical coercion. Much more effective were the armies of medical men who helped to straighten out their patients for their own supposed self-interest.

Other forms of power also persist in our midst. Some regard the power of data – that is the info-power of social media, data analytics and ceaseless algorithmic assessment – as the most significant kind of power that has emerged since Foucault’s death in 1984.

Those who fear freedom’s unpredictability find Foucault too risky

For identifying and so deftly analysing the mechanisms of modern power, while refusing to develop it into a singular and unified theory of power’s essence, Foucault remains philosophically important. The strident philosophical skepticism in which his thought is rooted is not directed against the use of philosophy for the analysis of power. Rather, it is suspicious of the bravado behind the idea that philosophy can, and also must, reveal the hidden essence of things. What this means is that Foucault’s signature word – ‘power’ – is not the name of an essence that he has distilled but is rather an index to an entire field of analysis in which the work of philosophy must continually toil.

Those who think that philosophy still needs to identify eternal essences will find Foucault’s perspective utterly unconvincing. But those who think that what feels eternal to each of us will vary across generations and geographies are more likely to find inspiration in Foucault’s approach. With respect to the central concepts of political philosophy, namely the conceptual pair of power and freedom, Foucault’s bet was that people are likely to win more for freedom by declining to define in advance all the forms that freedom could possibly take. That means too refusing to latch on to static definitions of power. Only in following power everywhere that it operates does freedom have a good chance of flourishing. Only by analysing power in its multiplicity, as Foucault did, do we have a chance to mount a multiplicity of freedoms that would counter all the different ways in which power comes to define the limits of who we can be.

The irony of a philosophy that would define power once and for all is that it would thereby delimit the essence of freedom. Such a philosophy would make freedom absolutely unfree. Those who fear freedom’s unpredictability find Foucault too risky. But those who are unwilling to decide today what might begin to count as freedom tomorrow find Foucault, at least with respect to our philosophical perspectives, freeing. Foucault’s approach to power and freedom therefore matters not only for philosophy, but also more importantly for what philosophy can contribute to the changing orders of things in which we find ourselves.

is the author of a book on Foucault and numerous essays in The New York Times, Critical Inquiry, and elsewhere. He is currently writing a genealogy of the politics of data. He teaches philosophy at the University of Oregon.

Paley isn’t the Common Ancestor of evolution and creationism

An article by Noah Berlatsky published online at The Atlantic this weekend comments on the creation debate of last week, with the view that creationism and evolution ‘share a common ancestor’ in William Paley. I’ve been researching and publishing on Paley for quite a while now, and while I think the article does some things well, overall I feel it really misrepresents Paley’s ideas. Most importantly, it reinforces the error that modern-day intelligent design is fundamentally similar to Paley’s arguments for a designer (but using newer examples). Paley’s argument is something quite different, perhaps even more consistent with theistic interpretations of Darwinism, but it’s not what Richard Dawkins or Michael Behe describe it to be.

Tonight’s a teaching night, so I only have time for a brief response, but I’ll make a few points:

The kind of creationism that Ken Ham advocates is utterly unrelated to natural theology. Paley doesn’t invoke scripture in the Natural Theology. His point is that by using natural arguments (in effect, by using the kind of experience and reason that is available to everyone regardless of their religious commitments) he can make the case for a deity. When Ken Ham effectively said that no empirical evidence could make him change his views on Christianity (he actually said that it would be impossible for such evidence to even exist), he reaffirmed that in his view, Scripture is the first source of knowledge, and the natural world secondary, and dependent upon that. For the sake of argument, at least, Paley reverses this: it’s because we see evidence of deity in nature that we are justified in considering the reliability of Scripture (which is what Paley’s Evidences of Christianity is about).

That said, Paley is not a Deist. The article conflates unitarianism with Deism in addressing the views of Samuel Clarke (and Newton), but claiming that there’s not a triune God, or that Jesus is not himself divine, is not the same as rejecting the idea of an interventionist God. Paley was an Anglican, and in his other writings makes it quite clear that he’s using natural theology as part of a larger argument to advocate for the Thirty-Nine Articles of the Church of England. Surprisingly, perhaps, he does so with a firm commitment to religious toleration. It’s toleration with pluralism. Part of Paley’s point is to show that the starting place for religion ought to be human reason and natural observation, not the revelation of Scripture or personal evangelical religious experience. He’s arguing against Evangelical Anglicans, Methodists and other Dissenters in doing so. At the same time, however, he’s trying to show that natural religion need not lead one to unitarianism or deism. In part because he takes this middle position, he was widely criticized from both sides when the book first came out.

It’s not at all true that “the questions Darwin was answering were ones that Paley had posed.” An article by me on how this falsehood about Paley developed was published just two months ago. One of the most novel parts of Paley’s natural theology, as opposed to that of Clarke, John Ray or others, is that he eliminates considerations of the origins of the material world from his argument, focusing instead on how one can tell that there was purpose behind the arrangement of things. Paley in part does this as a response to Hume’s criticisms of natural religion. This also means that Darwin’s questions about the origins of diversity and complexity of life were not ones that Paley was trying to answer. However, as I relate in my recent article, it’s a question that editors of Paley and some of the authors of the Bridgewater Treatises attributed to Paley in the 1830s (around the time Darwin first read Paley.) Does this mean that Darwin misread Paley? Not necessarily, because Darwin was also interested in other aspects of Paley’s account of nature.

The only place Paley is mentioned by name in the Origin of Species, he is cited approvingly: when Darwin considers the “utilitarian objections” to his theory. Darwin was concerned that the mechanism of natural selection might appear unnecessarily cruel or wasteful, but he points out, via Paley, that no creature has an adaptation that is primarily injurious to it. When Darwin mentioned Paley in his autobiography, it was in a section in which Darwin considers his personal faith. He says that Paley’s theological conclusions (that in nature we see the marks of a God) no longer seemed satisfactory to him. He does not suggest that the thing he finds unsatisfactory about Paley is Paley’s (nonexistent) account of origins. Misreadings of this passage in the autobiography feature prominently in the Paley-as-Darwin’s-foil myth as it emerged in the mid-20th century.

Paley’s main argument (I outline this in a 2009 article) is that the evidence of purposeful adaptation in nature comes from the correspondence between the laws of nature and the material world. Berlatsky’s article touches on this well. Paley allows that one entity might be responsible for setting the laws of nature, and another for the creation of material structures. The fact that the latter seem to anticipate use of the former (eyes are arrayed in ways that show adaptation to the laws of optics, ears to the laws of acoustics, wings to aerodynamics, and so on) suggests that the entity responsible for the material world intended these structures to use those natural laws, which they do to achieve purposes. (I’m simplifying Paley here, in the interest of time, rather than recapitulating my entire article.)

But the real point of this is that it’s because we can know the laws of nature, and because things work within the laws of nature, that we’re able (before we even consider scripture, miracle or revelation) to infer a deity. This is very different from Ken Ham’s creationism, and it’s also very different from modern intelligent design, which argues from the insufficiency of natural laws to account for observed phenomena. In fact, Paley explains why he thinks that arguments from insufficiency are absolutely the wrong approach to take (in part because the kind of God implied by them is obscure and unknowable).

There’s a lot more to say here about the political and social context of Paley’s thought, and I have some more articles in progress trying to get to those. But I wanted to very quickly address the main points of contention with this Atlantic article to prompt further discussion.

If anyone wants either my 2009 or 2013 articles on Paley and can’t access them online, contact me and I’ll send them to you.


Why a computer science degree?

  • I worked with a developer who stored thousands of items in a HashTable and then only iterated through the values. He never looked anything up by key. He obviously didn't know how a HashTable worked or why you would use one - a CS degree might help with that.
  • When working with regular expressions, it seems easier for people with exposure to basic automata theory and formal languages to reason about what's going on and troubleshoot their expressions - a CS degree might help with that.
  • A developer fresh from school may be able to decompose problems in various paradigm mindsets (OO, functional, logical) immediately, while a new non-degree developer needs experience before they can do the same.
  • Schools teach computational complexity. Non-degree developers may feel what's best but a formal understanding is sometimes nice, especially when explaining results to a colleague.
  • A degree offers an introduction to many models of the machine - hardware, OS, common data structures, networking, VMs. With these models in the back of your mind, it's easier to develop a hunch where a problem lives when something goes wrong. Again, non-degree developers build the same models but it takes time.
  • Expert guidance through any discipline may help the learner avoid dead-ends and missed topics. Reading is great but it's no substitute for a great teacher.
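The hash-table point in the first bullet can be sketched in a few lines. This is an illustrative example I've added using a Python dict as the stand-in for a HashTable (the names `build_index`, `find_by_scan`, and `find_by_key` are my own, not from the original answer): scanning every value is O(n) per query, while a keyed lookup is a single hash probe, O(1) on average.

```python
def build_index(items):
    """Map each item's id to the item, for O(1) average-case lookup."""
    return {item["id"]: item for item in items}

items = [{"id": i, "name": f"item-{i}"} for i in range(10_000)]
index = build_index(items)

def find_by_scan(index, target_id):
    """The misuse described above: ignore the keys and walk every value.

    This is O(n) per query and defeats the purpose of hashing."""
    for item in index.values():
        if item["id"] == target_id:
            return item
    return None

def find_by_key(index, target_id):
    """The intended use: one hash probe, O(1) on average."""
    return index.get(target_id)

# Both return the same item, but only the second scales.
assert find_by_scan(index, 9_999) == find_by_key(index, 9_999)
```

This is also where the computational-complexity bullet earns its keep: both functions "work", and only a rough feel for big-O explains why one of them collapses at scale.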

This is not to say that a CS degree is necessary to be a great developer. Hardly. Some of the best developers I've worked with have no degree. A degree gives you a running start. By the time you graduate, you've (hopefully) written a fair amount of code in various languages and environments to solve many types of problems. This puts you well on your way to the 10,000 hours required to be an expert.

A second benefit is that it shows employers you're able to commit to a long-term goal and succeed. In many companies, I believe that's more important than what you learned.

In 40 years, I expect .NET and C# to be nothing more than a grievous pile of legacy code on obsolete operating systems.

But the fundamental computer science concepts will be just as lively as they were when Shannon, von Neumann, Knuth, Dijkstra, Hoare, and the others dug them out of the ground of formal logic and math over 40 years ago.

I use almost all the CS I studied in school (*) every single day at my job. If you want to work in programming language design, search engine optimization, quant analysis, or any similar field, I suppose you could do it without a relevant degree, but it seems like an awful lot of stuff to have to learn on the job. I am not particularly highly educated given my line of work: many of my colleagues have PhDs in computer science, and several of them have been professors of CS.

Getting my degree was tremendously worth it for me; it has paid for itself many, many times over, both in dollars and in satisfaction.

That said, I thoroughly understand your point. Most people who program computers have jobs that do not require a CS degree; they require, say, a solid community-college-level background in practical programming plus keeping up with current industry trends. And that's fine. You don't need a degree in marine biology to run a successful aquarium store, and I think that aquarium stores are awesome. But it's awfully hard to get a job at Woods Hole if all you know how to do is raise goldfish.

(*) I have a B.Math in Applied Math and Computer Science from Waterloo.

It matters because technology does not remain static. Computer science is the basis for all digital technology. Most self-taught programmers last exactly one technology cycle because they lack the fundamentals to survive a major paradigm shift. Sure, there are exceptions to the rule, but a strong foundation in computer science greatly increases the odds of surviving one.

It depends what you want to do. If your goal is mainly programming business software in the large, where the business problem and practical complexity management issues are the hard part, then yeah, a CS degree isn't going to help much. If, however, your goal is to program stuff where the main difficulty is on the technical end, then a CS degree is more useful. (Though I don't have a CS degree, so I feel like a big hypocrite for saying that, so feel free to add "or self-teaching in CS subjects".)

I'm sure there are plenty of programmers out there who are great at managing complexity, programming in the large and solving common business problems, but would be absolutely lost if you asked them to write a memory allocator, or a parallelism library, or a collections library, or an operating system, or a compiler, etc. I'm sure the opposite exists to a decent extent, too. Both have their place and deserve respect, but a CS degree helps much more on the technical side.

I don't think a CS degree is an absolute indicator that a person is a good software developer. In fact, I started my career as a programmer with a math degree, but with a strong CS bias (math and CS were integrated in my program of study). I think there are two reasons why it matters, overall.

1 - Because Engineers are not the Front End for Recruitment

Human Resources people are. And while I picture many people rolling their eyes, I say "thank goodness!" What's more important - that you let the engineers make stuff (or break stuff), or that you make them sort through 1000s of resumes and do 1000s of interviews?

So, we have HR people and HR people screen the candidates until we get to a key group that can be screened by engineers. HR people have learned over time that having a CS degree is a pretty strong indicator that the candidate knows something about developing software. Hopefully they also know that writing software for 20 years is a good indicator that the candidate can write software.

2 - Because having some sort of system about learning about CS is better than none

CS is a huge field with lots to know. And it's changing all the time. These days, I can safely say that 75% of the coursework in my undergrad has become irrelevant to my career. And that my master's coursework from 5 years ago is depreciating rapidly. But when I started, I was glad that I paid a big institution to teach me something about computer organization, networks, good software engineering process, object oriented design, compilers, and the syntax/semantics of a major programming language that was currently marketable.

And I was glad it was in an environment where someone was paid to explain things to me when the book/website/lab project was not innately obvious.

And I was glad that I had access to a laboratory where computer health and the SDE were not my problem - I could more or less lock in and focus on a small part of the problem rather than also having to fix all the tools needed to solve the problem.

And while the courses didn't explicitly teach good communication, I think the only way you can really learn that is by working in teams - which IS a major part of many leading institutions offering CS degrees.

And I was glad to have a schedule with frequent feedback (i.e., grades and exams) that let me know whether I really understood what I had been taught.

Those things combine in my mind to be worth more than any book on the subject, but it's certainly not the be-all, end-all. There are certainly things I would not mind seeing institutions of higher learning improve, and I think that about 10 years after you've graduated, the degree you originally received is less important than the work you've done since.

Voip-Pal Files $9.7 Billion Lawsuit to Protect its Pioneering Telecom Technology

In an Interview with CEOCFO Magazine, Voip-Pal Founder Emil Malak Explains how the Company Expects to bring Major Returns to Shareholders by Proving that Major Companies are Infringing on its Key Patents for Communicating over the Internet

As Malak told CEOCFO's Bud Wayne, companies like Apple, Verizon, AT&T and Twitter that are using the technology have been approached by Voip-Pal to license or acquire the patents, but have failed to obtain licenses for Voip-Pal's patents. As a result, "We had no choice but to launch legal actions against Apple, Verizon, AT&T, and Twitter in order to protect our intellectual property and the interests of our shareholders," Malak said.

As Malak explained in the interview, Voip-Pal's technology (then Digifonica) was conceived, and design work begun, in 2004. "We had the vision that within ten years, the internet would become the primary means for telecommunications," Malak said.

It was a revolutionary idea at the time, before the iPhone, when most people were making calls using landline-based phones or cell phones, with information traveling over phone lines and cellular networks. Showing great foresight, Malak and his team realized that, in the future, calls, media and messages would be primarily routed using the Internet, with a seamless transfer to cell phones, landlines, or computers wherever necessary.

The technology that would enable this massive integration of modern and legacy systems would not only make decisions about how and where to route calls and messages, but also have the capacity to bill the appropriate providers, as part of the path of each call or message might utilize multiple technologies carried by dissimilar networks. "Our routing system enables such integration to work seamlessly and also provides a solution to internet billing and metering," explained Malak.

Now, "After spending over 12 years developing and testing, and more than seven years obtaining the related patents, Voip-Pal is ready to license or offer for sale its patented technologies," said Malak.

The technology is already in widespread use by major telecom and social media companies, which are infringing on the patents. "We believe that we have hundreds of millions of indirect subscribers that are presently using our patented technologies, which are deployed everywhere," said Malak.

The company has filed $9.7 billion in lawsuits against Apple, Verizon, AT&T and Twitter, with more lawsuits planned. Given the strength of the patents and the power of the technology, Malak fully expects that Voip-Pal will prevail, and that the infringing entities will license or acquire the technology, bringing major returns to Voip-Pal's shareholders.

Let’s Begin

Radiation specialists use the unit "rem" (or sievert) to describe the amount of radiation dose someone received. We are going to use that unit throughout the sections. Without getting into technical specifics about that unit, it is enough to know that it indicates a measure of how much radiation energy is absorbed in our body. And, as we will see in other sections, the total energy that is absorbed and its effectiveness in causing change is the basis for determining whether health effects may result.

We’ll get into some detail later, but for a baseline—

  • 10 mSv received in a short period or over a long period is safe—we don’t expect observable health effects.
  • 100 mSv received in a short period or over a long period is safe—we don’t expect immediate observable health effects, although your chances of getting cancer might be very slightly increased.
  • 1,000 mSv received in a short time can cause observable health effects from which your body will likely recover, and 1,000 mSv received in a short time or over many years will increase your chances of getting cancer.
  • 10,000 mSv in a short or long period of time will cause immediately observable health effects and is likely to cause death.
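Since the text introduces the older unit "rem" but lists the baseline doses in millisieverts, here is a quick sketch of the conversion (the helper function is illustrative; the standard relationship is 1 sievert = 100 rem, so 1 rem = 10 mSv):

```python
# Convert between the two dose units used in this section.
# 1 Sv = 100 rem, therefore 1 rem = 10 mSv.

def msv_to_rem(dose_msv: float) -> float:
    """Convert a dose in millisieverts (mSv) to rem."""
    return dose_msv / 10.0

# The baseline doses listed above, expressed in rem:
#   10 mSv -> 1 rem, 100 mSv -> 10 rem,
#   1,000 mSv -> 100 rem, 10,000 mSv -> 1,000 rem
```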

Four ways to help employees “get” their EAP benefits

Based on our survey and on EAP utilization numbers, we know employees just aren't wrapping their heads around this benefit. Here are four ways you can begin to change that in 2021:

  1. Focus on EAP during OE – As we recently detailed, 2021 OE won't look like any other. It will be challenging, but it's also a chance to improve. As you're reconsidering your OE strategy, focus communications on the under-utilized benefits employees need right now, including EAP. That may mean emphasizing the parts of your EAP employees don't already grasp. If your utilization numbers show that short-term therapy is popular but legal support has never been used, consider how you can highlight legal support during OE. Tweak your communications to your employee population.
  2. Work against stigma – mental health awareness is growing, but the stigma associated with treatment is still a hurdle. Sharing anonymous stories, encouraging senior leadership to emphasize EAP, and highlighting anonymity can all encourage employees to feel comfortable picking up their phones and reaching out for help. Read our post The Three Main Barriers to EAP Utilization for more.
  3. Create year-round campaigns – after OE, don’t press pause on EAP. Continue highlighting its benefits in your year-round communication campaigns. Employees may not need their EAP the moment you message them about it, but you never know when a reminder will come at exactly the right time. Whether by email, push communication, or Slack message, keep emphasizing to employees that help and support are just a call away.
  4. Go mobile – our survey revealed that just 7% of employers are using SMS messages to communicate benefits information, and 21% are using app notifications. Yet nearly all (99%) use paper or online enrollment materials. Given that a majority of Americans now own cell phones and most millennials report wanting to receive their benefits information in a different way, this dissonance is jarring. If your employees aren’t grasping your EAP, isn’t it time to try a method of communication that meets them where they already spend time? Use push notifications and SMS to promote under-utilized EAP services.

Food restriction reduces brain damage and improves behavioral outcome following excitotoxic and metabolic insults

Food restriction (FR) in rodents is known to extend life span, reduce the incidence of age-related tumors, and suppress oxidative damage to proteins, lipids, and DNA in several organ systems. Excitotoxicity and mitochondrial impairment are believed to play major roles in the neuronal degeneration and death that occurs in the brains of patients suffering from both acute brain insults such as stroke and seizures, and chronic neurodegenerative conditions such as Alzheimer's, Parkinson's, and Huntington's diseases. We now report that FR (alternate-day feeding regimen for 2–4 months) in adult rats results in resistance of hippocampal neurons to excitotoxin-induced degeneration, and of striatal neurons to degeneration induced by the mitochondrial toxins 3-nitropropionic acid and malonate. FR greatly increased the resistance of rats to kainate-induced deficits in performance in water-maze learning and memory tasks, and to 3-nitropropionic acid–induced impairment of motor function. These findings suggest that FR not only extends life span, but increases resistance of the brain to insults that involve metabolic compromise and excitotoxicity. Ann Neurol 1999;45:8–15

Should Science Be Required to Act As a Step in the Penal Process?

A Real Life Story

A woman in Florida is expecting baby number 3. She appears at a public health department for prenatal care and agrees to participate in a study for drug use and pregnancy.

She confesses that she has an addiction to cocaine (a known teratogen). She also answers yes to one of the questions asking if she would accept rehabilitative services if they were available. The information is passed on to the local authorities, her existing children are turned over to the state, and her in utero child is taken upon birth and placed in foster care while the mother is forced into treatment so she can get her children back.

The case goes to court: the mother sues the state of Florida and its public health system for breach of confidentiality. She loses, based on the required reporting of suspected child abuse. The case has gone to the Supreme Court.

There are a few ethical issues that are evident in this brief breakdown of this story.

  1. Mom disclosed the information willingly; she evidently had not acted in a manner to bring attention to herself, and she disclosed the information believing it would be kept confidential.
  2. The case was brought to court, and the argument was that the mother was abusing her unborn child by snorting cocaine while pregnant; hence the defense of reporting was based on the abuse that was purportedly done to her unborn child. Can we mandate good health practices by a pregnant woman, or risk her being charged with child abuse and neglect?
  3. Should willing disclosure be punished especially when it is a part of research?

There are many instances of ethical concerns when it comes to confidentiality and privacy matters in research. Allowing a participant's information to be revealed can cause insurmountable damage; it should be done only when absolutely necessary.

Watch the video: Cellular communication. Cells. MCAT. Khan Academy (September 2022).