
Londoniyyah - Part 7 - Deontological Ethics | Mohammed Hijab (2021-11-11)

Description

Londoniyyah - Part 7 - Deontological Ethics | Mohammed Hijab

To stay updated on our content, please subscribe and turn on notifications.

BOOK A LIGHTHOUSE MENTOR

Are you or someone you know doubting Islam? Do you find yourself struggling to find answers? Do you have a hard time speaking to someone about Islam? Are you considering Islam but are unsure about certain concepts? Are you an activist, Imam or community leader who is unsure about how to handle questions related to science, philosophy, the Islamic moral code, etc.?

You are not alone. Over the course of the last decade or more there has been a rapid proliferation of content online and in academic institutions that has eroded the faith of some people.

Seeing the rise of this phenomenon, Sapience Institute is introducing a one-to-one mentoring service called LIGHTHOUSE.

BOOK A MENTOR HERE: https://sapienceinstitute.org/lighthouse/

VISIT our website for articles in English, Spanish, and Turkish, our mentoring service and learning platform, and for speaker requests: https://sapienceinstitute.org/

Summary of Londoniyyah - Part 7 - Deontological Ethics | Mohammed Hijab

*This summary is AI-generated; there may be inaccuracies.

00:00:00 - 00:35:00

Mohammed Hijab discusses the conflict between universalizing principles and their consequences. He argues that there are no clear rules in morality, and that it is impossible to always know what the right thing to do is. Kant's ethics offers a possible solution to this problem, but it is not always applicable in the real world.

00:00:00 Mohammed Hijab discusses deontological ethics, which is the opposite of utilitarianism. Deontological ethics bases morality on intrinsic values, rather than on the consequences of an action.

  • 00:05:00 Mohammed Hijab discusses the deontological ethics of Islam and how it differs from the consequentialist ethics of modern society. He also discusses the principle of goodwill and how it plays into the deontological ethics of Islam. He states that, in order to be a good Muslim, one must follow the principle of goodwill and act as if their maxim of action were to become a universal law of nature. He gives four examples of how this principle can be applied in real life.
  • 00:10:00 Mohammed Hijab discusses the merits and demerits of deontological ethics, which is the opposite of consequentialism. He notes merits such as a more truthful society and answers to some of the problems with utilitarianism, but also that universalizing can lead to paradoxes: if everyone became a doctor, society would not function, which would seem to make becoming a doctor immoral, yet a society without doctors would also be dysfunctional.
  • 00:15:00 Mohammed Hijab discusses Jonathan Wolff's treatment of the carpenter example, and Kant's likely reply that the relevant universal maxim is not "everyone becomes a carpenter" but "everyone does the profession they are best at." He also discusses honesty and Kant's position that it is always wrong to lie.
  • 00:20:00 Mohammed Hijab discusses the categorical imperative, the moral principle that one should act only on maxims that could be willed as universal laws. He briefly discusses Kant's theory and how it compares to other ethical systems. He then contrasts the hypothetical imperative, which is prudential rather than moral, with the categorical imperative, which is moral. He finishes with a discussion of Kant's religious background and whether he made a moral argument for the existence of God.
  • 00:25:00 Mohammed Hijab discusses deontological ethics and argues that universalizing a principle can lead to inconsistencies and problems.
  • 00:30:00 Mohammed Hijab discusses the conflict between universalizing principles and their consequences. He argues that obeying one universalizing principle (such as never lying) can produce consequences, such as someone's death, that contradict the objectives of another universalizing principle (that murder is wrong). He also discusses Kant's claim that lying is always wrong, and uses a football-manager analogy to argue that deception is necessary to win wars.
  • 00:35:00 Mohammed Hijab discusses the strengths and weaknesses of consequentialism and deontological ethics. He notes that Western moral philosophy is mostly sub-compartmentalized under one of three theories: consequentialism, deontological ethics, and virtue ethics, which he says is not really in use. He argues that recognising whether someone's justification is consequentialist or deontological, and pressing them to defend it, makes you a difficult person to deal with in conversation.

Full transcript with timestamps:

0:00:13 how are you guys doing and welcome to
0:00:15 this new session that we're going to be
0:00:16 talking about deontological ethics in
0:00:18 what is this deontological ethics that
0:00:20 we are talking about we are going to
0:00:22 discuss what deontological ethics is
0:00:24 especially in conjunction with this main
0:00:25 proponent which is immanuel kant
0:00:27 probably one of the most
0:00:29 if not arguably the most prolific
0:00:32 enlightenment philosopher
0:00:34 that probably existed
0:00:35 but before we do all of that we're going
0:00:37 to read the poem and come back and
0:00:40 explain it
0:00:41 firstly
0:01:05 right
0:01:06 now what we're going to do now is look
0:01:08 at some of the things
0:01:09 that kant said but before we go into
0:01:12 that it should be noted that in the
0:01:13 liberal tradition really there are two
0:01:15 main schools of thought the first is
0:01:17 well you could say two schools of
0:01:19 thought which
0:01:20 act as a basis for liberalism
0:01:23 first as we've discussed last time which
0:01:24 is utilitarianism js mill we spoke about
0:01:27 jeremy bentham pain and pleasure the
0:01:29 greatest good for the greatest number
0:01:31 okay that's one thing the other thing is
0:01:33 this deontological ethics which in many
0:01:35 ways is the opposite of utilitarianism
0:01:38 very ironically it's the opposite and
0:01:41 it forms the basis for many people
0:01:44 of uh what constitutes deontological uh
0:01:47 sorry what constitutes the base of
0:01:48 liberalism
0:01:50 so
0:01:50 actually it's important to note that in
0:01:52 moral philosophy or in ethics yeah
0:01:55 usually in the western tradition is
0:01:57 divided into three different things okay
0:02:00 it's divided into deontological ethics
0:02:03 consequentialism number two and number
0:02:06 three virtue ethics okay now we are not
0:02:08 going to cover virtue ethics in any
0:02:10 level of detail in this course virtue
0:02:12 ethics is connected with hellenistic
0:02:15 philosophy
0:02:17 in particular aristotle who wrote a book
0:02:18 on virtue ethics virtue ethics just
0:02:21 quickly is about
0:02:22 not prescriptive or descriptive even
0:02:25 it's to do with virtues as it says on
0:02:28 the tin okay so it's about for example
0:02:31 he would say there's like a goldilocks
0:02:32 zone i mean not this language but the
0:02:35 goldilocks zone of not being too hasty
0:02:37 not being too
0:02:39 brave uh not being too um
0:02:42 cowardly but being brave and that's
0:02:43 that's that's where the virtue is
0:02:45 there's it's about what kind of
0:02:46 characteristics you have rather than
0:02:47 what you should do and what you
0:02:48 shouldn't do whereas with both
0:02:50 consequentialism and deontological
0:02:52 ethics it's about what you should do and
0:02:53 what you shouldn't do whereas for
0:02:55 for aristotle and those who followed him
0:02:58 it was about how you should be rather
0:03:00 than what you should do
0:03:02 but this how you should be rather than
0:03:04 what you should do although it sounds
0:03:05 very nice
0:03:07 especially in self-development uh terms
0:03:09 it's not very applicable when it comes
0:03:11 to legislation i mean you can't really
0:03:13 put that into
0:03:14 law you can't translate that into law
0:03:16 but this is illegal this is legal this
0:03:18 is right this is wrong
0:03:19 and so on and so because of that it's
0:03:21 not featured
0:03:23 in the discourses in modern times as
0:03:25 much as the other two have which is
0:03:26 consequentialism on the one hand as i
0:03:28 say
0:03:29 and deontological ethics which is what
0:03:30 we're going to be talking about today
0:03:32 consequentialism or utilitarianism which
0:03:34 we've discussed already which is the
0:03:35 greatest good for the greatest number is
0:03:37 a form of consequentialism okay is a
0:03:40 form of consequentialism it's very
0:03:42 important consequentialism is idea that
0:03:44 what you do is based on or the goodness
0:03:47 or the badness of whatever you do is
0:03:48 based on the consequences
0:03:50 so if i do something and it causes a lot
0:03:53 of harm for people
0:03:55 that consequence is a bad consequence
0:03:57 and therefore morality will be based on
0:04:00 whether the bad is more than the good or
0:04:02 the good is more than the bad
0:04:04 and that is utilitarianism obviously is
0:04:06 an extension of that is saying the
0:04:08 greatest good for the greatest number we
0:04:09 want to minimize the most pain for the
0:04:11 most people
0:04:12 now deontological ethics says
0:04:16 deontological ethics
0:04:19 is it says no it says this is not how we
0:04:22 should base our morality
0:04:24 we should base our morality on something
0:04:26 referred to as good will okay we're not
0:04:29 doing it because of its consequences
0:04:31 we're doing it because
0:04:33 it's good in and of itself it has
0:04:35 intrinsic value
0:04:38 so on the one hand consequentialism
0:04:40 utilitarianism
0:04:42 or utilitarianism which is
0:04:44 consequentialist
0:04:45 has what you call instrumental value
0:04:49 okay instrumental value whereas
0:04:51 deontological ethics says no it's
0:04:53 intrinsic value i'm doing it not because
0:04:55 of the consequences
0:04:56 but because it's good in and of itself
0:04:59 okay now obviously as you're thinking
0:05:00 here where do we stand on the on the
0:05:02 issues as muslims
0:05:04 are we on are we consequentialists some
0:05:07 are we deontological
0:05:09 and just to put it out there and it will
0:05:11 come to that in the poem but we're
0:05:13 neither or
0:05:15 we're both
0:05:16 okay this is the answer
0:05:18 we say that there are some times where
0:05:20 being consequentialist is the right
0:05:22 thing to do
0:05:23 and other times where being
0:05:24 consequentialist
0:05:26 is not the right thing to do and and
0:05:27 following
0:05:29 doing something for the sake of doing it
0:05:31 is the right thing to do
0:05:32 and we will see why taking one of the
0:05:35 two
0:05:36 and not incorporating the other
0:05:38 whilst it solves a lot of problems
0:05:40 creates a lot of problems as well
0:05:42 so
0:05:44 just to to if you look at the third
0:05:46 slide here you'll see
0:05:50 some of the books okay that were written
0:05:52 so immanuel kant wrote a book called
0:05:54 groundwork of the metaphysics of
0:05:55 morals in 1785.
0:05:58 now jeremy bentham we've discussed
0:06:00 already we discussed he was the mentor
0:06:01 of j.s mill who wrote the book on
0:06:03 utilitarianism which
0:06:05 basically acted as the basis for the
0:06:07 dominant ethic of today which is the
0:06:09 liberal ethic
0:06:11 he wrote his book introduction to the
0:06:12 principles of morals and legislation four
0:06:14 years later obviously he had other works
0:06:16 before that but you can see there's a
0:06:17 conversation that's being had between
0:06:19 them and they are clearly aware of each
0:06:20 other's work
0:06:22 so these are competing ideas
0:06:25 okay and not only are they competing
0:06:28 in a
0:06:30 when we look in hindsight back no
0:06:32 actually they were competing with each
0:06:33 other as it stands like they knew of
0:06:35 each other's works and stuff like this
0:06:38 so we talked about this already that
0:06:39 unlike utilitarianism the theory starts
0:06:42 with the principle of goodwill
0:06:44 and
0:06:45 what you need to understand is kant's
0:06:48 supreme morality this is what he said
0:06:51 okay
0:06:51 so i'm going to read it out exactly in
0:06:53 his words
0:06:54 and we can think about it a little bit
0:06:55 maybe we can speak to the person next to
0:06:57 us he says this
0:06:59 act as if the maxim of your action were
0:07:02 to become by your will a universal law
0:07:05 of nature
0:07:07 so what is he saying
0:07:09 he's saying your actions
0:07:11 you have to imagine had everyone done
0:07:14 what i am doing would this be
0:07:18 good or bad
0:07:20 which in many ways and even though it's
0:07:23 against utilitarianism in utilitarianism
0:07:25 in some senses in many ways is similar
0:07:28 to it because you're still looking at
0:07:29 the aggregate aren't you still looking
0:07:30 at the end result
0:07:32 but in
0:07:34 direct terms it's not utilitarianism
0:07:37 because we're not looking at the direct
0:07:39 consequences in peer groups in society
0:07:41 we're looking at the overall we're
0:07:42 looking macro straight away
0:07:45 and that's why some have said actually
0:07:47 this is a form of utilitarianism but
0:07:49 it's called rule utilitarianism so they
0:07:51 divide utilitarianism to two things
0:07:53 act utilitarianism and rule rule
0:07:56 utilitarianism and they refer to this as
0:07:58 rule utilitarianism so in a sense
0:08:02 in an ironic kind of sense
0:08:04 it is a type of utilitarianism even
0:08:07 though it's against
0:08:08 the
0:08:09 the reasoning of utilitarianism which is
0:08:11 we look at the consequences he says no
0:08:13 we don't look at the consequences we're
0:08:14 doing it for the sake of doing it
0:08:16 and that's uncompromising okay it's very
0:08:18 uncomfortable we're going to see how
0:08:19 uncompromising it is
0:08:23 so he gives
0:08:24 four examples and we're gonna oh yeah so
0:08:27 before we get there so you might have
0:08:29 heard some apologetics in islam you know
0:08:31 someone asked about the polygamy
0:08:32 situation why is polygamy allowed in
0:08:34 islam
0:08:35 and uh you might have heard it from zack
0:08:37 and i call ahmed didat on these kinds of
0:08:39 people and say well look there's more
0:08:40 women than there are men in the world
0:08:42 and if all the men got married and all
0:08:44 the women got married
0:08:46 then there'll be more women in the world
0:08:47 this is a problem
0:08:49 so what kind of what kind of reasoning
0:08:50 is he using he's actually using a
0:08:52 kantian reasoning if you think about it
0:08:53 right
0:08:55 now should he be using a kantian
0:08:56 reasoning i mean
0:08:58 that's a question for another day but
0:08:59 the point is
0:09:01 there are limitations we're going to
0:09:02 come to see
0:09:04 of this kantian reasoning okay
0:09:07 do we believe in kantian reasoning in
0:09:08 all of the things in islam certainly not
0:09:11 okay there are some things which have
0:09:12 nothing to do with that
0:09:13 and we can show easy examples of this
0:09:16 but let's move on
0:09:18 so kant mentions four things
0:09:22 and he mentions i mean he mentions more
0:09:24 than four things but four main things
0:09:26 which he states
0:09:28 are if everyone did it if if everyone
0:09:31 did this
0:09:32 it would be an impossible state of
0:09:33 affairs it would be an impossible state
0:09:35 of affairs and therefore it's not just a
0:09:37 matter of the consequences
0:09:39 it's a matter of impossibility like the
0:09:42 society would be unrunnable
0:09:44 had these things been put in place
0:09:46 and the first thing he mentions is false
0:09:48 promises if everyone gave everyone false
0:09:50 promises
0:09:51 then society would not function it would
0:09:53 be an impossible state of affairs
0:09:56 that would that and that's true
0:09:57 to many extent we don't disagree with
0:09:59 this i mean there are some things that
0:10:00 if everyone done
0:10:02 that there would be a problem they say
0:10:04 an eye for an eye makes the whole world go
0:10:06 blind that's not true yes it would make
0:10:08 everyone have one eye by the way
0:10:10 monocular vision it's not they wouldn't
0:10:12 have it wouldn't ever because blindness
0:10:14 would require both eyes to be gone and
0:10:16 anyway so this is false even if you
0:10:18 think about it just just logically
0:10:20 these expressions sometimes you think
0:10:22 about them just with a bit of
0:10:23 calculation everything falls apart
0:10:25 but the point is the idea is you know if
0:10:27 everyone's done it what would happen if
0:10:29 everyone gave each other false promises
0:10:30 what would happen if everyone
0:10:33 committed suicide and he gives them the
0:10:34 very third thing he talks about un um
0:10:38 untapped talent like if you have a
0:10:40 talent and you don't explore your talent
0:10:41 well if everyone who had a talent didn't
0:10:43 explore their talent what would happen
0:10:44 you'd have a talentless society
0:10:46 you know and uh
0:10:49 and he gives by the way interestingly he
0:10:50 gives the example of suicide as a fourth
0:10:52 example as well
0:10:54 now
0:10:55 before we go any further
0:10:57 i want you to speak to the person next
0:10:58 year and let's talk about the merits of
0:11:00 this
0:11:01 and think about the demerits of it as
0:11:03 well like okay you're already starting
0:11:06 to think this is what he's saying if
0:11:07 everyone done it
0:11:08 that's basically what he's saying
0:11:11 what can you think of are the merits and
0:11:14 the demerits of
0:11:16 this particular
0:11:18 moral philosophy
0:11:20 and so i'm going to give you three
0:11:21 minutes
0:11:22 and then we'll come back and talk to
0:11:24 each other
0:11:26 there was a book written by machiavelli
0:11:28 i'm not sure if anyone's read it
0:11:30 it's the prince
0:11:31 you know and it was written by
0:11:32 machiavelli to it was it was advice
0:11:36 to princes to to rulers and how to rule
0:11:39 a country
0:11:40 and it's the premise of the book if you
0:11:42 want to put it that way
0:11:44 is the ends justify the means
0:11:46 do whatever it takes to get the job done
0:11:49 very pragmatic approach very let's get
0:11:51 the job done approach so he'll say
0:11:53 things i'm not you know word for word
0:11:54 but
0:11:55 you use as much
0:11:58 love as possible to get them to love you
0:12:00 and if that fails get them to fear you
0:12:01 you know
0:12:02 things like that you know
0:12:04 and some of it i mean obviously it's
0:12:06 very practical advice
0:12:08 but
0:12:09 you can if you go too far in that
0:12:11 direction as we'll see today
0:12:14 uh
0:12:14 it can lead to catastrophic uh
0:12:17 outcomes
0:12:18 you can justify all kinds of killing and
0:12:20 all kind you can do a lot of things but
0:12:22 before we get to those examples let's
0:12:24 talk about deontological ethics which is
0:12:26 the opposite of consequentialism which
0:12:28 says
0:12:28 the ends never justify the means it's
0:12:30 about the aggregate total and the
0:12:31 universalizing principle
0:12:33 what are some of the merits
0:12:35 uh
0:12:43 um just truthful society
0:12:47 if you uh if you
0:12:48 universalize the truthfulness and
0:12:51 everybody's telling the truth
0:12:53 then you get rid of from you know 90
0:12:56 percent of the world's problems probably
0:12:58 so that's uh yeah so things like
0:13:00 corruption yeah a noble thing yeah to do
0:13:02 yeah yeah you can see how why that can
0:13:05 be a really good thing yeah it also
0:13:06 addresses some of the problems with the
0:13:08 utilitarianism like for example gang
0:13:10 rape and things like that
0:13:12 yes yes some of the obvious
0:13:15 examples uh the problems with
0:13:17 consequentialism yes yes
0:13:19 good point anything else any other
0:13:21 it's a good point that you said that it
0:13:22 addresses a lot of the problems of
0:13:24 consequentialism
0:13:25 obviously the opposite can also be said
0:13:27 that but we'll come to this
0:13:29 um any other points
0:13:31 what's the strength of this theory
0:13:37 i mean look just think about one of the
0:13:38 examples that we talked about untapped
0:13:40 talent that's a really good way of
0:13:42 putting it isn't it
0:13:43 he's basically d he's
0:13:45 he's immoralizing
0:13:47 or demoralizing the idea of someone not
0:13:49 using their talent
0:13:51 and he's saying it's not it's not good
0:13:53 because if everyone did that we wouldn't
0:13:54 have we would have
0:13:56 what kind of society talent in the
0:13:57 society
0:13:58 if you think about it that's a very
0:14:00 powerful thing to think it's a very
0:14:01 powerful way of putting it
0:14:03 you know
0:14:05 and it's efficient economically you can
0:14:07 see the merits in this no
0:14:08 can you see the merits any other
0:14:10 merits before we move on
0:14:17 yes more fair
0:14:18 you can get more fair society of it
0:14:20 depending on how the society is in the
0:14:22 first place
0:14:23 but potentially yeah i mean it depends
0:14:24 on what society is in the first place
0:14:27 you know
0:14:28 anything else
0:14:32 anything from your side
0:14:35 not really what about the demerits so
0:14:36 some of the
0:14:38 yeah you're ready now yeah
0:14:40 it seems to run into paradoxes sometimes
0:14:42 for example if you take the
0:14:44 universalizing principle
0:14:46 so if if if everyone becomes a doctor
0:14:49 according to that principle it would be
0:14:50 a society wouldn't function because we
0:14:52 need diversity in everybody else so
0:14:54 therefore according to that principle
0:14:55 becoming a doctor should be immoral but
0:14:57 then if if nobody becomes a doctor then
0:15:00 that also uh is not a desirable right so
0:15:03 it's funny enough because there's a book
0:15:05 that you should read on this or you know
0:15:07 begin a book
0:15:09 uh
0:15:10 by jonathan wolff jonathan wolff he's
0:15:13 actually got a book on moral philosophy
0:15:15 i think it's called moral philosophy
0:15:17 it's very easy to read and he actually
0:15:19 mentions this exact
0:15:22 example he mentions in fact the
0:15:24 carpenter example if everyone were to
0:15:26 become a carpenter but then he then he
0:15:29 says look this immanuel kant
0:15:31 this is what jonathan wolff is saying he
0:15:33 says immanuel kant would say this is not
0:15:35 a very good interrogation
0:15:36 and the reason why is because the
0:15:38 universalizing principle here is not
0:15:40 that
0:15:41 everyone should be a carpenter or doctor
0:15:44 the universalizing principle is that
0:15:45 everyone should do the profession that
0:15:47 they are best at
0:15:49 do you see the point you see and see
0:15:51 that is a good that is a good response
0:15:53 to that refutation
0:15:56 so we have to get stronger now
0:15:58 we have to become more powerful we'll
0:15:59 get stronger especially when
0:16:02 you want to say anything
0:16:05 applying that principle then the
0:16:07 universalizing principle
0:16:08 there's uh why is arbitrary explain to
0:16:11 me why it's arbitrary yeah yeah
0:16:13 why can you uh apply it uh in that way
0:16:16 which is like everybody should uh
0:16:18 he's saying that he's saying that if if
0:16:20 society were to act in x
0:16:23 x-way where x here
0:16:26 and if that's universalized
0:16:29 and if that thing which is being
0:16:30 universalized makes society impossible
0:16:32 to run like everyone lying to each other
0:16:34 or contracts not being fulfilled
0:16:36 promises then that thing should be
0:16:37 outlawed
0:16:40 it's a strong argument
0:16:41 and we need something stronger than this
0:16:45 to refute it
0:16:47 by everybody just telling the truth it
0:16:49 would be also chaotic
0:16:51 in the sense of like
0:16:52 uh
0:16:53 sometimes you have to hide some some i
0:16:56 like this so this very good point but we
0:16:58 will get some good examples of this so
0:17:00 have you got any examples of this yeah
0:17:01 for instance
0:17:03 if you ask me something that
0:17:05 i don't want to answer in the sense
0:17:07 no but he will say if you don't want to
0:17:08 answer it and not answering it it's not
0:17:10 telling no it's not lying about it yeah
0:17:12 but
0:17:14 okay we'll come to this right
0:17:15 let's not jump the gun okay it'll we'll
0:17:17 come to this in fact that was something
0:17:19 he brought up himself someone comes to
0:17:20 the house but we can we can have
0:17:22 we can have stronger examples
0:17:24 and husbands so many
0:17:26 relations that no that is true you know
0:17:29 and just just for just for the sake of
0:17:31 for the sake of tanbih here uh
0:17:34 reminder there is a hadith on this
0:17:38 that lying is not allowed except for
0:17:40 three things but we'll come to what
0:17:41 those three things are
0:17:43 afterwards like lying does he say that
0:17:46 it's always
0:17:48 wrong yes and we'll come to that we'll
0:17:49 come to that all right let's let's move
0:17:51 on to this um
0:17:52 lying is always wrong for him there
0:17:54 is never an exception there is never and he's
0:17:56 very clear about that there's there's no
0:17:58 second interpretation and lying you can
0:18:00 never tell a lie
0:18:02 you know and why because it's not about
0:18:03 the consequences it's about it's
0:18:05 universalized you've made it into a
0:18:06 moral and therefore you can't lie
0:18:08 okay let's move on let's move on
0:18:14 we talked about
0:18:16 he he's he's interesting and he's a he's
0:18:19 a genius
0:18:20 you can't mock him because
0:18:23 you have to deal with him you know you
0:18:24 really do have to deal with this guy's
0:18:25 not a joke you can't just come and say
0:18:27 well i've got the example that's going
0:18:28 to finish him you have to think deeper
0:18:30 if you want to finish this guy
0:18:31 um
0:18:33 first and foremost
0:18:34 his
0:18:36 he has some really interesting examples
0:18:38 he says sometimes you can have virtues
0:18:40 yeah
0:18:41 like self-control is a virtue right
0:18:43 because those virtues are sometimes not
0:18:45 good to have in certain contexts
0:18:48 and it gives the example of a
0:18:49 self-controlled criminal
0:18:51 and look what he says i find this
0:18:53 fascinating it's very interesting to be
0:18:54 honest he goes the coolness of a
0:18:57 scoundrel makes him far more
0:18:59 dangerous
0:19:01 the coolness of a scoundrel makes him
0:19:03 far more dangerous because he's saying a
0:19:05 chaotic scoundrel like someone
0:19:07 who's like a scoundrel right a ruffian
0:19:10 you know a thug but if he's chaotic he
0:19:12 gets caught quickly his emotions run
0:19:14 high he doesn't think but the ones who's
0:19:16 cool and calculated and sit down with
0:19:18 his friends and and plan the heist and
0:19:21 this and that and going into the bank
0:19:22 this guy's a dangerous one because he's
0:19:25 he's calculated he's systematic he's
0:19:27 machiavellian you know it's
0:19:29 you know maybe some films there you're
0:19:31 thinking about is it called the heist is
0:19:32 that the film
0:19:33 it's called heat yeah is there a film
0:19:34 called the heist
0:19:36 there is a film called the heist you
0:19:37 know so these are these are cool-headed
0:19:39 scoundrels right there
0:19:40 obviously and that and by the way the
0:19:42 cool-headed scoundrels are always seen
0:19:44 in a different way in i think theater
0:19:46 and movies sometimes even the
0:19:48 protagonist
0:19:50 like there was a series uh it's called
0:19:52 breaking breaking bad or something like
0:19:54 that
0:19:54 and this guy he's a scoundrel
0:19:58 the main guy the what's his name
0:20:02 yeah so he's a scoundrel basically he's
0:20:03 a cool-headed scoundrel and he's
0:20:05 thinking about this and that and by the
0:20:07 way this guy he's portrayed as
0:20:10 machiavellian
0:20:12 he's exactly machiavellian
0:20:14 you know
0:20:15 he's portrayed as like he'll do any the
0:20:17 ends justify the means in any situation
0:20:19 he's a consequentialist you know so kant
0:20:22 would have a few words to say
0:20:23 about him
0:20:25 but before we get in there i i think we
0:20:27 need to get some key terms out of the
0:20:28 way yeah
0:20:29 so there are two important key terms
0:20:31 when we're dealing with kant and we're
0:20:32 dealing with deontological ethics that
0:20:33 we need to be familiar with okay
0:20:35 the first of the two key terms is that
0:20:37 which is referred to as hypothetical
0:20:39 imperatives
0:20:41 okay
0:20:42 and the second of the key terms is
0:20:44 categorical imperatives
0:20:46 and i'll explain the difference between
0:20:48 the two
0:20:49 a hypothetical imperative is when you
0:20:52 say if you study for example i'm giving
0:20:54 an example right if you study you'll be
0:20:56 successful
0:20:57 so it's prudential in nature okay it's
0:21:00 it's telling you that if you do x then y
0:21:02 result would occur
0:21:04 if you go to the barber
0:21:07 you can get a haircut and that's not a
0:21:09 great example if you jog you'll get fit
0:21:10 yeah something like that
0:21:13 and so it's a hypothetical imperative it
0:21:14 doesn't really have a moral value
0:21:16 judgment there
0:21:18 it's just saying that if you do this
0:21:20 such and such result would come about so
0:21:22 it's prudential in nature and prudence
0:21:25 prudence is
0:21:27 your ability your competence your
0:21:28 ability to do something and get
0:21:30 something done
0:21:31 it's not saying this thing is good or
0:21:33 bad okay
0:21:34 whereas the word categorical imperative
0:21:37 is is moral now and when we say
0:21:39 categorical when we say kant's
0:21:41 categorical imperative we are talking
0:21:43 about the universalizing principle the
0:21:44 maxim that we've just spoken about
0:21:47 right so the prudential thing sorry the
0:21:49 hypothetical imperative is is not moral
0:21:51 whereas when we talk about categorical
0:21:54 imperatives that is
0:21:56 a moral now so it is on his let's put it
0:21:59 in religious language it's haram to lie
0:22:02 for him in every circumstance it's wrong
0:22:04 it's haram
0:22:05 for for you
0:22:07 to not fulfill your talents
0:22:09 it's haram to commit suicide
0:22:12 it's haram not to keep your promises i
0:22:14 mean for the most part these are
0:22:15 actually haram anyway
0:22:17 for the most part but we do have some
0:22:19 significant exceptions
0:22:24 and here again
0:22:26 we have some of these examples now what
0:22:28 i want you to do let's pause it again
0:22:30 and these are four examples okay
0:22:33 and before we get to the problems of
0:22:35 deontological ethics are we all itching
0:22:36 how do we refute this guy he seems very
0:22:39 coherent now he seems very strong
0:22:42 and this is basically i would say the
0:22:43 best the west has to offer
0:22:45 genuinely this is the best the gold
0:22:47 standard
0:22:48 western
0:22:49 thing immanuel kant this is the
0:22:51 reason why he is celebrated because his
0:22:53 whole enterprise was
0:22:55 let me try and get morality outside of
0:22:57 religion
0:22:58 this is what he got it's you can see
0:23:00 already
0:23:02 how it competes with utilitarianism and
0:23:04 how it provides answers for some of the
0:23:05 problems but we'll come to the problems
0:23:07 of this uh of of deontological ethics
0:23:09 before we do it before we do that
0:23:11 let's think about it
0:23:13 you've got four things which he mentions
0:23:15 suicide
0:23:16 neglecting your talents false promises
0:23:18 and refusing to help others that's four
0:23:21 things yeah
0:23:23 speak to the person next to you for the
0:23:24 next three minutes
0:23:27 about those four things and tell me what
0:23:30 you think about them whether
0:23:32 why you think he's saying that where you
0:23:34 think any paradoxes may be
0:23:36 where you think any contradictions may
0:23:37 lie what are the merits and demerits
0:23:40 of
0:23:41 um of highlighting those four points in
0:23:43 his categorical imperative
0:23:45 yeah three minutes and then we'll come
0:23:47 back
0:23:50 there's a whole discussion about whether
0:23:51 his religious uh
0:23:53 he has christian background or
0:23:55 i mean i'd have to look at the
0:23:56 literature i don't think he
0:23:57 he's known to he's known to attack some
0:23:59 of the arguments of god's existence the
0:24:01 cosmological arguments the ontological
0:24:02 arguments
0:24:04 uh
0:24:05 did he make a moral argument himself
0:24:06 some some claim that he made a moral
0:24:08 argument for god's existence
0:24:10 um
0:24:11 because a lot of this why is there
0:24:12 morality how you know and so on some of
0:24:15 it is he would say it's a priori we know
0:24:17 morality a priori is something which is
0:24:19 in the mind this is like you're starting
0:24:21 off with it
0:24:27 i'm not sure if that's an exact quote
0:24:28 but yeah basically something like that
0:24:30 yeah
0:24:31 yeah his whole project was his whole
0:24:33 project was he's trying to get a moral
0:24:35 structure outside of religion
0:24:40 yeah yeah in spite of that yes he came
0:24:43 after him right
0:24:45 yes yes
0:24:47 even in terms of humans wait let me let
0:24:49 me make sure that yeah he did i just
0:24:52 like yesterday okay and there's a
0:24:54 discussion about whether david hume was
0:24:56 an atheist himself as well
0:24:58 he was an idea there's actually a
0:25:00 discussion somebody said no yeah yeah
0:25:04 some say he was like a secular christian
0:25:06 yeah he came after
0:25:08 he came after like he said i think but
0:25:09 they're both they were both in the in
0:25:11 the 1700s yeah i think about hume said
0:25:14 he took me out of my darkness or
0:25:16 something along the way yeah yeah
0:25:19 all right let's three minutes
0:25:21 those four examples again
0:25:26 so who's got the first contribution on
0:25:28 this side on the left side
0:25:30 before i pick someone up
0:25:34 uh refusing to help other people
0:25:37 yes um if if it was universalized
0:25:40 then you could have people
0:25:42 helping
0:25:44 those who have bad intentions
0:25:46 that's a really good point you can have
0:25:48 a criminal you can who would ask you for
0:25:50 help yes so give me the maps of the bank
0:25:53 right and then if you refuse to help him
0:25:56 then
0:25:57 yes that's a paradox as well that is a
0:25:59 very good point that is a good point if
0:26:01 you're you're refusing to help other
0:26:02 people
0:26:03 what kind of people are we talking about
0:26:05 you're refusing to help criminals
0:26:07 refusing to others in some cases it will
0:26:09 make more sense than others like for
0:26:10 example doctors have an obligation
0:26:13 not to refuse treatment to anybody who
0:26:15 comes into the hospital and that's you
0:26:16 can understand that
0:26:18 but when you talk about like you've just
0:26:20 said these examples that you've just
0:26:21 used it's a
0:26:22 entirely different kettle of fish
0:26:25 any other examples on this side
0:26:27 um one of the things we spoke about was
0:26:29 that i know it seems that there's like a
0:26:31 root assumption here that it's better
0:26:33 for society to function than for it not
0:26:34 to function
0:26:36 because if you're trying to make an
0:26:37 ethical system basically from i know
0:26:39 without religion or without reference to
0:26:40 god or whatever yeah so all of these
0:26:42 things is saying that you know if these
0:26:44 things are universalized it'll be
0:26:45 impossible for society to function he
0:26:46 mentions all these things but he hasn't
0:26:48 actually said why it's better for them
0:26:50 that is a very a true point and you know
0:26:52 you know who refutes him on this
0:26:54 um john mackie refutes him on this in
0:26:57 his book ethics
0:26:58 he has a book called ethics and he
0:27:00 spends a time a considerable chunk of
0:27:02 his time
0:27:03 just refuting him
0:27:05 on this and he and he mentions why
0:27:07 universalize anyway like well you
0:27:09 haven't provided any justification for
0:27:11 it and he starts giving possible reasons
0:27:13 why someone would want to universalize a
0:27:15 principle one of them is empathy which
0:27:17 we'll actually have a whole session on
0:27:18 just empathy by itself
0:27:20 which uh paul bloom actually wrote a
0:27:22 book called against empathy very
0:27:24 interesting but why are we
0:27:27 universalizing in the first place and
0:27:28 what justification do we have to
0:27:30 universalize this is a very good point
0:27:32 how did we get
0:27:33 from from from a description to a
0:27:35 prescription and you now
0:27:38 have the same issue that we had with
0:27:39 utilitarianism how do we how do we
0:27:41 justify the premise which is that in
0:27:43 fact
0:27:44 there should be a universalization there
0:27:46 should be a categorical imperative
0:27:48 that it should be functional
0:27:51 why shouldn't there be anarchy
0:27:54 why is lying wrong
0:27:56 for instance can we ask why lying is
0:27:58 wrong
0:27:59 yeah even that yeah no but he'll give
0:28:00 you an answer for that yeah his answer
0:28:02 is okay if so why is lying wrong what
0:28:05 would he say about this
0:28:09 right because if everyone done it
0:28:11 what would happen
0:28:13 you would have an impossible
0:28:14 society you see because everyone's lying
0:28:17 to each other you can't sell you can't
0:28:18 buy you can't marry you can't transact
0:28:22 if if society was based and that's
0:28:24 to a large extent that's actually a very
0:28:25 strong argument
0:28:29 but we'll move on
0:28:31 to the problems with deontological
0:28:32 ethics and lying is
0:28:34 uh i think the best example to use
0:28:36 because he mentions lying and obviously
0:28:37 he's
0:28:38 in all cases against lying
0:28:40 and in his book
0:28:43 on a supposed right to lie
0:28:46 because of
0:28:47 uh philanthropic concerns
0:28:50 he says what if a man comes to the house
0:28:53 okay the woman is there
0:28:55 and these guys are
0:28:57 i don't know gangsters military man
0:28:58 wherever they may be yeah where's your
0:29:00 husband
0:29:02 and obviously they want to come in and
0:29:03 kill him
0:29:04 if she says yeah he's just in the front
0:29:06 room
0:29:07 all right cool come inside and shoot him
0:29:08 kill him
0:29:10 well we could that that was the example
0:29:12 he said no in that situation she
0:29:14 shouldn't lie
0:29:15 she should tell the truth does he say
0:29:17 alternate truth
0:29:19 no no he doesn't say any of that he says
0:29:20 she is immoral for her to lie
0:29:23 he doubles down on that and then he gets
0:29:25 criticized and smashed to pieces because
0:29:27 of this and this is where
0:29:28 consequentialism
0:29:30 becomes a bit more alluring but the
0:29:32 question is why would it be alluring
0:29:34 because he on his is there inconsistency
0:29:36 there somewhere
0:29:37 well let's think about it
0:29:39 is
0:29:40 one of one of his one of his
0:29:42 universalizing principles surely is that
0:29:44 you don't want people to die right in
0:29:46 the society that you know the murder is
0:29:48 wrong
0:29:50 so if murder is wrong and that's
0:29:51 universalizing principle
0:29:54 then what causes murder
0:29:57 i mean should be wrong essentially
0:29:59 but he will say no because this is now a
0:30:02 conflict of universalizing principles
0:30:05 so you can have one universalizing
0:30:06 principle
0:30:07 which contradict the consequences of
0:30:09 which will contrad will contradict
0:30:11 another universalizing principle and
0:30:13 there's nothing he can do about it and
0:30:14 it's just a limitation of his system yes
0:30:16 could you say like i'm going to kill
0:30:18 myself and then if you don't then you're
0:30:20 lying it's not condition
0:30:23 i'm going to kill myself yeah and then
0:30:24 obviously lying is wrong no if you're
0:30:27 talking about
0:30:28 lies which are not straight faced lies
0:30:30 which are speaking in fork tongues and
0:30:32 stuff now if you say like i'm going to
0:30:34 kill myself obviously lying is wrong
0:30:35 right yeah so why would you know
0:30:36 scenario on kantian ethics
0:30:39 i'm going to kill myself he's killing
0:30:40 himself as long as
0:30:42 yeah yeah it's contradiction
0:30:44 so if i'm going to kill myself no what
0:30:46 would be wrong here is killing himself
0:30:48 if he mean would you choose to lie would
0:30:50 you choose to kill yourself in that
0:30:51 scenario on kantian ethics no but
0:30:53 if you say i'm going to kill myself he's
0:30:54 telling the truth right there's no
0:30:55 problem there if you say oh if you say
0:30:57 i'm not going to kill myself
0:30:59 well that's that's fine that's that's
0:31:00 just him telling the truth about his
0:31:02 state there's no issue there the issue
0:31:04 is
0:31:05 now we've said okay yes go ahead
0:31:08 go to the example for example let's say
0:31:10 he's held at gunpoint this guy says okay
0:31:12 i'll do it myself i'll kill myself
0:31:14 right yes so i think that's what he's
0:31:17 getting at so if he was lying there and
0:31:20 he was saying that he just escaped that
0:31:21 that still would be wrong again
0:31:23 from his standpoint but if he does go and
0:31:25 kill himself it'd be wrong as well yeah
0:31:27 yeah oh yeah but that's you're telling
0:31:29 me about implementation here i'm telling
0:31:30 you about the theory right i'm saying to
0:31:32 you that look in and of itself there's
0:31:34 issues here this if you if you do think
0:31:37 a which is in this case tell them man
0:31:40 where your husband is so you can get
0:31:41 killed yeah
0:31:43 it's going to lead to his
0:31:45 death and if it
0:31:45 leads to his death his death is murder
0:31:47 one of your other universalizing
0:31:49 principles was that murder is wrong
0:31:51 so you're
0:31:53 the consequences of
0:31:55 uh of of not abiding by
0:31:58 or of obeying universal universalizing
0:32:01 principle a
0:32:02 was that it contradicted one of the
0:32:04 objectives of universalizing principle b
0:32:08 and there's nothing you can do about it
0:32:10 but then we can we can have stronger
0:32:11 examples of this
0:32:13 instead of just um
0:32:15 someone coming in so we've got a sorry
0:32:17 to say a child a one-year-old child and
0:32:19 if you don't tell me this i'm going to
0:32:21 rape the child and if you don't do this
0:32:23 we can make it more gruesome and on his
0:32:25 ethic you'd have to let the child you'd
0:32:27 have to let the child get raped you'd
0:32:29 have to let nuclear weapons get
0:32:31 launched if you if you if you don't do this
0:32:34 lying could save a nuclear attack you
0:32:36 can imagine that
0:32:38 so
0:32:39 actually if there's a nuclear attack
0:32:42 what's the point of a universalizing
0:32:43 principle if there's no one for it to be
0:32:44 implemented on right
0:32:49 you see
0:32:50 so this is why the prophet muhammed he
0:32:53 told us right don't lie
0:32:55 you know and this things but he says
0:32:57 hello he also said this you know that
0:32:59 you're not allowed to lie except for in
0:33:00 three situations and there are more
0:33:02 these are just three he mentioned
0:33:04 obviously we know life and death is
0:33:05 always an excuse it's always an excuse
0:33:08 in islam
0:33:09 but also
0:33:10 you know for the
0:33:12 for the partners for the spouses
0:33:15 and by by the way it's both ways someone
0:33:16 says oh this is sexist and feminist well
0:33:18 i say listen even on your paradigm
0:33:20 you're saying a man to woman woman's or
0:33:21 man it's both ways it's like that if a
0:33:24 man sometimes you what are you gonna
0:33:26 uh imagine if imagine if partners
0:33:29 were sitting in the house telling the
0:33:30 truth about what they thought
0:33:32 okay like
0:33:34 uh just imagine like sorry this is like
0:33:35 comedy sketch here sitting down
0:33:39 did you think my friend looked good yeah
0:33:41 yeah you look amazing today
0:33:44 did you think this is and they just talk
0:33:46 about this and that
0:33:47 who was better me or your ex-husband no
0:33:48 the ex-husband was actually better
0:33:52 and so on and so forth what would that
0:33:55 do to relationships it will destroy them
0:33:57 and the other thing is uh to try and
0:33:59 reconciliate between two people
0:34:01 so islam says if you have two friends
0:34:03 and you can give them a little white
0:34:04 lies this and just to bring them back
0:34:06 together because the ends justify the
0:34:08 means here
0:34:09 and the third and most controversial one
0:34:12 actually is completely uncontroversial
0:34:15 that war is deception
0:34:17 imagine being in war
0:34:18 okay and telling your enemy so where
0:34:21 have you put the bomb i put it in
0:34:22 coordinates
0:34:24 you're going to lose every war
0:34:26 that's why you need machiavelli
0:34:28 you don't need immanuel kant here you need
0:34:30 machiavelli if you want if you go to war
0:34:32 you will have a football manager
0:34:34 we have a football manager forget about
0:34:36 war we have a football manager who's
0:34:37 going to be immanuel kant who's going to
0:34:38 give away all the formations and stuff
0:34:40 like that or we're going to have
0:34:41 machiavelli i think everyone will choose
0:34:42 machiavelli
0:34:45 you know or alex ferguson or whoever it
0:34:47 may be you know the point is is that
0:34:50 consequentialism makes sense in some
0:34:52 areas
0:34:54 but by the way islamically there are
0:34:55 some areas where there's never a
0:34:57 consequential reason for example if
0:35:00 if if if somebody said rape your
0:35:03 daughter otherwise we will kill you
0:35:05 he can't say it's life or death i have
0:35:06 to rape her.
0:35:07 it's basically all scholars say that he
0:35:09 has to take the shot
0:35:10 because you can't defile somebody else's
0:35:13 rights in this
0:35:15 in this um in this dimension on this
0:35:18 domain of taking their sexual rights
0:35:20 like that especially through rape and
0:35:21 stuff you cannot do it
0:35:23 so if someone says so therefore
0:35:25 consequentialism is out the window here
0:35:28 consequentialism is out the window
0:35:29 because consequentialism
0:35:32 would mean that you you can you can do
0:35:33 lots of things you can go very far
0:35:36 you know so there we would side with
0:35:39 kant
0:35:40 but other places we would side with mill
0:35:42 and that's why we believe in the middle
0:35:44 the middle path
0:35:45 now
0:35:47 there are other examples of this but
0:35:49 what we are going to do is conclude this
0:35:51 session today i think we've covered some
0:35:53 very important points we've seen some of
0:35:54 the strengths and there have been some
0:35:55 considerable strengths of this theory uh
0:35:57 moral theory but there are some
0:35:59 considerable
0:36:00 weaknesses as well i think we've with
0:36:02 this we've seen that
0:36:03 and likewise with consequentialism
0:36:05 utilitarianism we've seen some of the
0:36:06 strengths okay the greatest good for the
0:36:08 greatest number you know all that stuff
0:36:10 but also some of the weaknesses you know
0:36:12 and um in the next session we're going
0:36:14 to be going into more details about some
0:36:16 of those strengths and weaknesses and
0:36:17 going into more depth but if you
0:36:19 understand okay if one understands that
0:36:22 the west western philosophy western
0:36:24 moral philosophy ethics is these three
0:36:27 things
0:36:28 consequentialism
0:36:29 i mean obviously it's more but mostly
0:36:31 it's
0:36:32 somehow sub-compartmentalized under one
0:36:34 of these three things consequentialism
0:36:37 deontological ethics which we've talked
0:36:38 about today and thirdly virtue ethics
0:36:40 which is not really in use then you've
0:36:42 understood a lot you have actually
0:36:43 understood a lot
0:36:45 and so you'll be able to make you'll be
0:36:46 able to connect the dots you'll be
0:36:48 watching a film anything that's
0:36:49 consequentialist ethics you'll be
0:36:50 watching uh
0:36:51 maybe you shouldn't be watching these
0:36:52 kind of films anyways but you'll be
0:36:54 you'll be seeing something or on the
0:36:55 news or this and that someone reasoning
0:36:57 something or this is wrong because
0:37:00 and you'll realize that most of the time
0:37:02 they'll bring about
0:37:04 a justification which is either
0:37:06 deontological or consequentialist
0:37:08 like honestly 90 percent of the time that's what
0:37:10 they're gonna do
0:37:11 they'll bring about a justification
0:37:13 which is it's wrong because it causes
0:37:15 this consequence the consequences of the
0:37:17 action or it's wrong because it is just
0:37:18 wrong
0:37:19 intrinsically wrong
0:37:21 you've covered a lot there
0:37:22 but both of those things have problems
0:37:24 if you say if that's what you believe in
0:37:26 then you have to justify this standing
0:37:27 up and
0:37:29 in conversation
0:37:30 this makes you a very difficult person
0:37:32 to deal with
0:37:33 but we don't want you to be a very
0:37:34 difficult person on the loved ones and
0:37:36 the close ones but only on the enemies
0:37:39 and
0:37:40 the antagonistic ones
0:37:41 and with that we conclude
0:37:46 all right so let's do a discussion
0:37:48 we're going to have one consequentialist
0:37:50 camp