I usually run this shell script after installing SBCL because developing a practical Common Lisp program usually requires libraries. The script installs Quicklisp and Ultralisp as a package manager and a package repository, respectively. Moreover, I set the working directory for my Common Lisp projects to Develop in my home directory, because when I put them in quicklisp/local-projects, I tend to forget to back them up or even forget where the projects are.
#!/bin/bash
# My working directory is $HOME/Develop. You probably want to change it.
rm -rf ~/quicklisp
rm quicklisp.lisp
wget https://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp \
     --eval '(quicklisp-quickstart:install)' \
     --eval '(ql-util:without-prompting (ql:add-to-init-file))' \
     --quit
rm quicklisp.lisp
sbcl --eval '(ql-dist:install-dist "http://dist.ultralisp.org/" :prompt nil)' --quit
if [ -e ~/.config/common-lisp ]; then
    cp -rp ~/.config/common-lisp ~/.config/common-lisp.bak-$(date -I)-$$
fi
mkdir -p ~/.config/common-lisp
cat <<EOF > ~/.config/common-lisp/source-registry.conf
(:source-registry
(:tree (:home "Develop"))
:inherit-configuration)
EOF
I wondered whether I could use pyenv and pipenv on Fedora Workstation 40, although I don't use these utilities in my personal projects. The answer is yes.
The steps are as follows:
Install dependencies
sudo dnf builddep python3
Install pyenv
curl https://pyenv.run | bash
I know that you may not like running a Bash script piped straight from cURL.
Modify .bashrc
Pyenv normally tells you to append these lines to .bashrc and then restart your terminal.
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
# Restart your shell for the changes to take effect.
# Load pyenv-virtualenv automatically by adding
# the following to ~/.bashrc:
eval "$(pyenv virtualenv-init -)"
Install Python via Pyenv
pyenv install 3.10 # You can choose other versions
I know that we can install Python from source using configure and make, like many other packages, but you can use pyenv as well.
I pass a pointer to a BBox to ShowWidth, which, according to @meeusdylan's post, slows down my program because the runtime has to determine whether a BBox must live on the stack or escape to the heap, adding work for the garbage collector.
In the alternative code below, I don't use a pointer.
So, what I was worried about is true. MOVSD_XMM copies the value of a BBox member from memory to a register, one member at a time. You can see that MOVSD_XMM appears 4 times per ShowWidth call.
I didn't measure which one is faster or slower. I've heard that Skymont supports multiple loads per cycle, and I hope that includes loading float64 values with MOVSD_XMM as well. So copying an entire BBox is hopefully fast. And, at least as far as I have been told, a BBox passed by value will definitely stay on the stack, with no need for the GC to check it.
Moreover, passing by value seems to fit Go community conventions better than passing a pointer. It will look familiar, and everyone will be happier to see pass-by-value code.
My plan is to avoid pointers by default and use them only when I have to. As for performance, I think I should benchmark before switching to a pointer; if the speed is acceptable, I won't optimize.
I worked on a TODO coding assignment to show off my skills and, more importantly, to expose my weak points. I coded it in Golang with Masterminds/squirrel. Later, I ported only the SQL generation part to Clojure to compare them and discuss why I prefer Clojure, something I am often asked about and sometimes even meet opposition over. I will discuss it function by function and type by type. The first function is makeStatement.
The makeStatement function's name clearly indicates that it uses the builder pattern. However, to improve readability and avoid cluttering the function with too many details, it delegates building the order and filter information to separate functions: buildOrders and buildFilters. The next one is the make-statement function in Clojure with HoneySQL.
In the Clojure version, the main difference is that filters->map and orders->map are pure functions, which do not mutate their inputs the way buildOrders and buildFilters mutate the builder in Golang. The next thing I will show is the contract, or type, or spec.
In Golang, to complement function definitions, I define custom types for conveying order and filter information. While using strings for this purpose is also acceptable, I prefer using types to leverage Go's static analysis and prevent typos.
On the other hand, in Clojure, I defined similar contracts using Clojure Spec. Here, the information about orders and filters being collections of tuples resides within the Spec definition itself, unlike the separate function definitions in Golang.
buildOrders looks very familiar. It reminds me of Pascal, which I learned 30 years ago. This suggests that the code utilizes a well-established approach, making it understandable to most programmers even without prior Go experience. However, I've identified potential code duplication between the type definition and the switch-case within this function.
The Clojure function orders->map might have surprised my younger self from 30 years ago. However, it leverages Clojure Spec to its full potential. Spec validates the input to the function and provides clear explanations when validation fails. Furthermore, orders->map is a pure function, meaning it doesn't modify its input data. Both the input and output data leverage Clojure's persistent maps, a fundamental data structure known for immutability. Therefore, unit testing the orders->map function is relatively straightforward. I have no idea how to write a unit test for buildOrders in Go.
In conclusion, Go's main advantage lies in its familiarity for programmers from various languages like Pascal, Java, JavaScript, Python, and C. This familiarity extends to the builder pattern, which offers the additional benefit of auto-completion in IDEs and smart editors. On the other hand, Clojure and HoneySQL emphasize using data structures, especially persistent maps, for building queries.
While auto-completion is less important for Clojure programmers who are comfortable manipulating basic data structures, Clojure Spec offers significant advantages in data validation.
Spec can explain what happens when data fails to meet the requirements, promoting better error handling and adherence to the open-closed principle (where code can be extended without modifying existing functionality). Additionally, Clojure Spec is not part of the function definition itself, allowing for greater flexibility and potential separation of concerns.
More importantly, writing unit tests in Clojure with HoneySQL is significantly more efficient. Because orders->map is based on persistent data structures, it avoids modifying the input data. This immutability, along with the ease of comparing maps, makes them ideal for testing.
I struggled to fix bugs in my custom bilingual alignment tree. Unlike standard trees from libraries like React or Tcl/Tk, this tree has unique data manipulation requirements. Most of the bugs stemmed from these data handling complexities, not from the visualization itself. This is just one example of the challenges I faced. I encountered similar issues repeatedly, for instance, when building directed acyclic graphs (DAGs) for word breakers.
I've applied various programming techniques I've learned, but unfortunately, none of them seem to solve my problem. These techniques included tracking my bathroom habits (yes, you read that right!), coding in Ruby and Python with OOP, using static checking in Java, C++, and Rust, and applying a functional programming approach like Haskell's function coloring.
Surprisingly, the answer came when I started working with Lisp.
Non-destructive data transformation
I initially coded in LOGO and Racket (DrScheme), both members of the Lisp family, but those experiences were limited to toy problems. As a result, I didn't grasp the real benefit of Lisp until I revisited it by re-implementing part of my word breaker in Racket. Racket, like other Lisp dialects, relies heavily on singly-linked lists, the default data structure in Lisp. Unlike the array-based lists common in other languages, singly-linked lists manipulated with the CONS function avoid destroying existing data: constructing a new list on top of an existing one doesn't require copying the original list, which makes it ideal for frequent operations on large datasets. CONS itself runs in constant time, further helping efficiency.
With Lisp, I can focus on using CONS for data manipulation without worrying about less important things like class design and function coloring. Additionally, since Lisp avoids destroying data, I can use the interactive environment conveniently without reloading data from files. While I acknowledge some past frustrations, using Lisp has finally allowed me to achieve what I have been striving for for years. Furthermore, adopting this functional programming approach can be beneficial in other languages, although limitations exist. These limitations will be discussed in the section on building collective team value through programming language choices.
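As a small illustration of this structure sharing (my own sketch with made-up values, not code from the post): CONS builds a new list on top of an old one without copying it.

(defparameter *old-list* (list 1 2 3))
(defparameter *new-list* (cons 0 *old-list*)) ; one new cell, constant time

*old-list*                        ; => (1 2 3), unchanged
*new-list*                        ; => (0 1 2 3)
(eq (rest *new-list*) *old-list*) ; => T, the tail is shared, not copied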
Sustainability
I've been coding for a long time, starting with Python 1.6. As Python evolved through versions 2.X and now 3.X, I've seen the benefits of new features alongside the challenges of breaking changes. While Python's rapid development is great for startups, it can make long-term maintenance difficult for projects with strict stability requirements, such as those for governments or foundations.
Similar challenges exist in other languages. While Java boasts strong backward compatibility, the vast developer landscape can lead to codebases with varying levels of modernity (pre-generics, pre-Java 8, etc.), necessitating additional maintenance efforts. Rust, while powerful, shares Python's characteristic of rapid evolution.
The Stability of Common Lisp
Like other languages, Lisp has evolved over time. However, Common Lisp, the standardized dialect established in 1994 (with Lisp's roots dating back to 1959), offers a key advantage: stability. Code written in Common Lisp is less susceptible to breaking changes due to language updates. Additionally, powerful macros allow for the creation of new syntactic constructs at the library level, without modifying the core language specification that impacts everyone.
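As a small illustration of what "new syntactic constructs at the library level" means, here is a sketch of a macro I might write (my own example, not from the original post): it adds a WITH-TIMING form without any change to the language itself.

;; Run BODY and report how long it took, returning BODY's value.
(defmacro with-timing (&body body)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&Elapsed: ~,3F s~%"
                 (/ (- (get-internal-real-time) ,start)
                    internal-time-units-per-second))))))

;; Usage: (with-timing (sleep 0.2))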
Furthermore, my observations of Common Lisp projects on GitHub and elsewhere reveal a remarkable consistency in coding style, even when developers aren't actively collaborating. This suggests a strong community culture that prioritizes code clarity and maintainability.
Building collective team value through programming language choices
Since most of the problems I encounter seem universal, I believe adopting a Lisp-style coding approach could benefit the entire team. However, a complete language switch might not be realistic for many developers and organizations. As a compromise, I attempted to implement Lisp-style coding in Java, but this proved unsuccessful. While elements like parentheses, CAR, CDR, and macros are easy to identify and often the target of complaints, they're not the main obstacles.
However, an experienced programmer once mentioned the difficulty of building complex programs in LOGO despite its easy start. This suggests that the core challenge might lie in the Lisp programming paradigm itself. Introducing Lisp concepts into Java often leads to endless debates rather than progress, even with extensive explanations. However, for team members who grasp the Lisp paradigm, overcoming the hurdle of parentheses becomes a less significant issue.
Searching for how to accomplish something in Java won't yield results written in Lisp style, which can lead to frustration and confusion for developers wondering if their approach is incorrect. In contrast, searching for the same task in Lisp will provide solutions that leverage proper Lisp idioms. This continuous cycle of coding and searching in Lisp builds the common value of the team.
Clojure
While Common Lisp offers a rich ecosystem for web development (front-end, back-end), and mobile apps, some developers may find its toolset less extensive compared to popular languages like JavaScript, Go, or Java. Clojure, a dialect of Lisp, addresses this by seamlessly integrating with Java/Scala/Kotlin libraries. ClojureScript provides a similar bridge to the JavaScript world, and ClojureDart is under development for Dart environments. Beyond its powerful concurrency features and data structures, Clojure and its sub-dialects make Lisp a viable option for building modern applications. A prime example is Nubank, a major online bank with over 100 million customers, which demonstrates the power of Clojure for large-scale applications.
Static type analysis excels in error checking, autocomplete, and native code generation. Conversely, dynamic typing shines in interactive environments like Jupyter Notebook, where type inference can struggle due to the unpredictable nature of future code. Languages can leverage both.
It seems managing singly-linked lists with shared sub-lists using RAII can be tricky. While I experimented with Rust's reference counting, it led to stack overflows for lists above 300,000 elements. Memory arenas appear more suitable, but effectively handling arena lifetimes in nested lists with varying types is challenging.
You can check out my library in the repository on Codeberg. Perhaps you want to fork it, or even take over the project.
I'm tired of checking the contents of my RocksDB databases by writing code. So I created a GUI tool called DuHin for browsing RocksDB databases, using cl-gtk4 and cl-rocksdb.
I want to develop my Rust library more interactively, with a Lisp-style read-eval-print loop (REPL). So I wrote this post to show, step by step, how I used evcxr, yet another REPL for Rust, and inf-evcxr, yet another evcxr wrapper for Emacs.
I cloned the wordcut-engine project and opened README.md in Emacs.
I ran inf-evcxr, which created a new inf-evcxr window.
I added the project as a dependency by running inf-evcxr-add-dep-current-project.
I marked a region over the use statements and ran inf-evcxr-eval-region.
I marked a region over the initialization code block and ran inf-evcxr-eval-region.
I moved the cursor to let txt = "หมากินไก่"; and I ran inf-evcxr-eval-line.
I moved the cursor to wordcut.put_delimiters(txt, "|"). Then I ran inf-evcxr-eval-line. The REPL printed the result of the word tokenizer (wordcut::put_delimiters).
I did the same on wordcut.build_path(txt, &txt.chars().collect::<Vec<char>>()). The REPL printed the graph that is used for word tokenization.
I could make the REPL show a prettier result by evaluating dbg!(wordcut.build_path(txt, &txt.chars().collect::<Vec<char>>()));.
I redefined txt to กากินกิน, and ran the word tokenizer again.
This allows me to experiment with and manually test functions by changing the input variables to different strings and checking the results, as plain strings or even internal structures, without recompiling the program or reloading the data. This is helpful for manual testing and debugging.
Compile-time type checking is a great way to catch errors early, but it is not a guarantee of correctness. Even simple subroutines can be incorrect. For example, is_old_enough is a subroutine for checking whether a person is at least 21 years old.
Adding an equals sign (=) to the comparison changes the behavior of the subroutine, even though the code is still type-safe. A similar bug was found in Servo, but with an integer type.
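For illustration, here is the shape of that bug sketched in Common Lisp (my sketch, not the post's original code): both expressions type-check, but only the first one means "at least 21".

(> age 20)   ; correct: true only for ages 21 and up
(>= age 20)  ; one extra character: a 20-year-old now passes, and it still compiles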
Testing the entire program manually or programmatically is essential, but it can be difficult to catch all errors, especially those hidden in the details. Testing subroutines is important because it allows testers to focus on small, well-defined units of code. This makes it easier to identify and fix errors. Here are three prerequisites for testing subroutines:
Defining subroutines
An input environment for testing
Result validation
Defining subroutines
Some programming languages encourage programmers to define subroutines more than others. This is because some languages have features that make it easier and more natural to define and use subroutines.
Defining subroutines in BASIC programming language
In the 1970s, to define a subroutine in BASIC, you would assign it a line number and use the RETURN statement.
1000 PRINT "SUBROUTINE"
1100 RETURN
We can call a subroutine in a program using the GOSUB command, followed by the line number of the subroutine.
GOSUB 1000
Defining a subroutine in BASIC is as simple as using the GOTO statement, but with the added convenience of being able to return to the calling code.
Defining subroutines in Common Lisp
In Common Lisp, a function is a subroutine that always returns a value when it is called with a specific set of inputs. This Common Lisp code processes each a-person, a member of the list people, one by one using the DOLIST command. If a-person is at least 21 years old, the program prints it out.
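The code itself is not reproduced in this text, so here is a minimal sketch of what the paragraph describes (struct and variable names come from the surrounding text; the sample data is made up):

(defstruct person name age)

(defvar people
  (list (make-person :name "A" :age 30)
        (make-person :name "B" :age 18)))

(dolist (a-person people)
  (when (> (person-age a-person) 20)
    (print a-person)))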
We can create a new function from the part (> (person-age a-person) 20) by using the DEFUN command, with a function name, old-enough?, and an input variable, a-person.
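A sketch of the extracted function and the loop rewritten to call it:

(defun old-enough? (a-person)
  (> (person-age a-person) 20))

(dolist (a-person people)
  (when (old-enough? a-person)
    (print a-person)))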
Common Lisp encourages programmers to create subroutines by making it easy to copy and paste parts of code, which are also known as expressions, or forms.
Defining subroutines in Java
Here is a Java version of a print-a-person-if-at-least-21 program. Java uses the for loop instead of the Common Lisp DOLIST command.
Unlike Common Lisp, Java requires type annotations for functions. The function is_old_enough was annotated as a function that takes a Person as input and returns a boolean. Moreover, in Java, programmers must decide whether a function belongs to the class or to an object by using the static keyword. Programmers also use the private and public keywords to control access to functions. Java functions always require a return statement, similar to BASIC subroutines, except for functions that do not return any value.
Java encourages programmers to create subroutines too, but with all these annotations it is not as encouraging as Common Lisp.
Defining subroutines in Crystal: Static typing doesn't mean more annotations.
My explanation of Java, a statically typed programming language, may have led to the misconception that statically typed languages require more annotations. Crystal, another statically typed programming language, is a counterexample. Here is a Crystal version of the print-a-person-if-at-least-21 program. Instead of the DOLIST command, Crystal uses the each method.
To create a function, we can copy the expression a_person.age > 20 and paste it into a DEF ... END block, without any type annotations or a RETURN statement.
def old_enough?(a_person)
  a_person.age > 20
end
We can substitute the expression a_person.age > 20 with a function call old_enough?(a_person).
Surprisingly, the Rust version of is_old_enough looks similar to the Crystal version, but with type annotations. Type annotations in Rust are more complicated than in Java because Rust has references, and programmers need to think about the lifetimes of variables. Type annotations and lifetimes could make it more difficult for programmers to write subroutines in Rust.
Type annotations make definitions precise and easier to read, but they require more work, can be distracting, and do not encourage a programmer to create a subroutine.
Preparing an environment for calling a subroutine
Some programming language features and software designs can make preparing the environment for calling a subroutine difficult. Moreover, maintaining the code that prepares the environment can require unnecessary work if that code is too tightly coupled to data structures, which change often.
Preparing an environment in Common Lisp and JavaScript
The variable a-person is an environment for calling the function old-enough?. We create a data structure from a struct in Common Lisp by calling a make-* function. In this example, we call the function make-person.
(make-person :name "A" :age 30)
Moreover, we can make a data structure from a struct using #S syntax, which is in the same form as it is printed.
#S(PERSON :NAME "A" :AGE 30)
This #S syntax is very useful when we have existing data structures, because it allows us to use printed data structures to prepare the environment later. This is especially helpful when we want to build long or complex data structures, such as a list of 1,000 people.
In JavaScript, we can prepare data structures in a similar way to Common Lisp, but without specifying the types of the data.
{"name":"A","age":30}
Like Common Lisp, JavaScript can dump data structures to JSON format using the JSON.stringify() command.
It is easy to prepare a data structure as an environment for calling Common Lisp and JavaScript functions, especially because we can reuse the format that a data structure was dumped from memory.
Preparing an environment in Java and Rust
In Java, we create a data structure by instantiating a class using the new keyword. The arguments, which are the input values for creating an object, are sent in a strict order without any keywords, such as :name and :age seen in the Common Lisp example. This style should be fine when the number of arguments does not exceed three.
var a_person = new Person("A", 30);
We can call the function is_old_enough, which in Java is a class method.
is_old_enough(a_person)
Alternatively, we can define the function is_old_enough as an object method, and then call it with this syntax.
a_person.is_old_enough()
Still, the method for preparing the person data structure remains the same. So class methods are not necessarily easier to test than object methods.
In Rust, we create a data structure with a struct literal. However, Rust has one more step, which is converting &str to String using the function to_string.
Person { name: "A".to_string(), age: 30 }
Although both Java and Rust cannot use the printed format to create a data structure directly, we can use a JSON library to dump and load data.
So, preparing an environment in Java and Rust is not as convenient as in Common Lisp or JavaScript, since we cannot copy a printed data structure and use it directly in the program without the help of an additional library.
The difficulty in preparing the environment is caused by the software design.
Sometimes preparing the environment is difficult because of the software design. To create a Person object in this example, we must pass in the person's name and a service that can return their age.
So, we cannot prepare a person data structure with a specific age without creating a service, which is only remotely related to testing the function is_old_enough.
Using basic data structure
Instead of defining a class or a struct, we can use a list for representing personal data.
'(:name "A" :age 30)
Using a list removes unnecessary restrictions on creating a person, even though our design is primarily to get a person from a service. Here is an example of calling a function to obtain a person data structure from a service.
(get-person "A" service)
In JavaScript, we can create an object, which is idiomatic for JavaScript, instead of a list.
{"name":"A","age":30}
In Java, we can use a HashMap, although creating a HashMap in Java does not look as concise as a list in Common Lisp.
However, using a list or other basic data structure also has a downside, which will be explained later.
Modifying the data structure affects the code for preparing an environment.
Suppose we added reward to the struct Person.
struct Person {
    name: String,
    age: u32,
    reward: u32,
}
This code for creating a Person data structure would be broken.
Person { name: "A".to_string(), age: 10 }
We have to create a data structure by passing a reward value.
Person { name: "A".to_string(), age: 10, reward: 800 }
It may seem trivial, but I've never enjoyed fixing repetitive code in tests.
Use default values for values we don't care about.
In Rust, we can create a data structure with default values, and then assign only the values we care about.
let mut a_person = Person::default();
a_person.age = 30;
Before we use the function default, we put #[derive(Default)] before the struct definition.
We can use a list instead of a specific struct, and in the list we can put only :age, without the other values. Still, we can run the test.
(setq a-person '(:age 30))
(old-enough? a-person)
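Note that for the plist version to work, old-enough? has to read the age with GETF instead of the struct accessor; a minimal sketch of such a variant (my addition, not from the original post):

(defun old-enough? (a-person)
  (> (getf a-person :age) 20))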
Using basic data structures has some downsides. Lists and hash tables do not perform as well as structs, because accessing a struct member is very fast: the position of each member in memory is computed arithmetically. Moreover, when everything is a list, a compiler cannot help check types, since all the types are the same. A programmer may have no idea what the data structure looks like just by reading a function definition. Still, we can alleviate these problems by using a runtime schema such as JSON Schema.
Preparing an environment for async functions and database connections is not convenient
Some subroutines need a database connection to be established. Some need an async event loop running before testing, for example, async functions in Rust. Preparing a fake database and wiring everything up before testing is inconvenient, especially for testing a function like is_old_enough; this can be fixed by improving the software design. Testing async functions becomes easier with a tool such as tokio::test.
Testing a subroutine in the production environment
Testing in the production environment is not preferable, but sometimes it is necessary, especially when we cannot reproduce the problem somewhere else.
Common Lisp can run a Read-Eval-Print Loop (REPL) alongside the production system, so we can always test subroutines. Many languages come with a REPL, but we have to make sure that libraries and frameworks play well with the REPL. In the Common Lisp community, libraries and frameworks are usually REPL-friendly.
Result validation
After running a subroutine, we usually want to validate the result either manually or programmatically.
Programmatic validation
Most data comparison functions check if the data is the same object in memory, which is not what we want in this case. The code below does not return true even if the content of the data structures is the same because the EQ function does not compare the content.
When testing, we usually want to compare data structures for content equality. In Common Lisp, we can use the EQUALP function to do this, instead of the EQ function.
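A small sketch of the difference, with illustrative data (not from the original post):

(defparameter *p1* (list :name "A" :age 30))
(defparameter *p2* (list :name "A" :age 30))

(eq *p1* *p2*)      ; => NIL, they are different objects in memory
(equalp *p1* *p2*)  ; => T, their contents are the same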
Manually validating a complex data structure can be difficult, so there are many tools that can display data in a structured view. In Common Lisp, we can use Emacs inspectors like slime-inspect and sly-inspect, or we can use Clouseau, which is part of McCLIM. For other programming languages, I typically convert data structures to JSON and view them in Firefox.
Public Id As String
Public Title As String
Public Author As String
Public Sub PrintObject()
Print Id, Title, Author
End
Static Public Sub Info()
Print "หนังสือเป็นสื่อ"
End
A license grants, at a minimum, the freedoms that free software should have, namely permission to use, copy, and distribute the software. For example, part of the Expat/MIT license reads: «Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:» Note that it includes the words use, copy, modify, publish, and distribute.
Most people are even less willing to do work that gives their competitors an advantage. For free software, a competitor that modifies and improves a program and then sells it or uses it in their business without releasing the source code to the public can be seen as taking unfair advantage of others. Some licenses, such as the GNU General Public License (GPL), were created to address this issue; one part reads: “You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:” which includes the condition that the modified source code must be released. Examples of software under the GPL include Linux, WordPress, VLC, and Blender, all popular software; Linux in particular is developed jointly by many private companies.
However, in the era of software used over a network, some claim that they are not distributing the program and therefore do not have to release the source code. This led to the GNU AFFERO GENERAL PUBLIC LICENSE (AGPL), part of which reads: “Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.” This is a condition requiring the source code to be made available when a modified program is offered for use over a computer network. Examples of software under the AGPL include Mastodon, Nextcloud, and OnlyOffice, all of which are programs used over a network.
Patents
Thailand does not yet have software patents, but not everyone is in Thailand or will stay in Thailand forever, so patents must be considered as well, because one can avoid infringing copyright and still lose money by infringing a patent. Therefore, version 3 of the GPL also includes provisions about patents and damages: «A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License» Similarly, version 2 of the Apache License has a comparable provision: “Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work”
Non-program works
The license most commonly used specifically for documents, images, videos, and music is the Creative Commons license, abbreviated CC, with optional conditions to choose from: BY, the creator must be credited; SA, modified works must be released under the same license; NC, no commercial use; and ND, no derivatives. As you can see, the NC and ND conditions conflict with free software principles. Projects using Creative Commons licenses include Wikipedia.
Although some Creative Commons variants are incompatible with free software, they still grant rights to the public at large, meaning everyone receives the rights. This differs from the terms of service of many social media platforms, where users may use the service only after accepting terms that grant the platform the right to use, process, modify, publish, or even sublicense their work to others.
Since several friends have been discussing AI law, particularly the part concerning copyrighted works and generative AI (prompted by the popularity of the photo-editing app Loopsie), and mentioning the development of the draft AI Act in the European Union, let me describe the overall process of how legislation is made in the EU, including the source of authority of each EU institution, so we can understand where this draft law, which currently exists in three versions*, comes from (which may relate to the position each draft takes).
*The initial proposal from the European Commission (21 Apr 2021), the provisional position draft from the Council of the European Union (6 Dec 2022), and the position draft of the European Parliament (14 Jun 2023).
The Council of the European Union represents the governments of the member states (usually the ministers of the ministries relevant to the matter under consideration, which is why it is also called the Council of Ministers).
The European Council consists of the heads of state or government of the member states.
—
The European Council*,** has no direct power to legislate, but it plays an important role in setting policy direction, which in turn influences the decisions of the other institutions.
(* Since the European Council has no legislative power, if you see the word "Council" in the context of considering legislation, it is more likely to mean the Council of the European Union.
** There is another body with a similar name, the Council of Europe, which is not an EU institution.)
(*This three-way consultation process is not laid down in the EU treaties; it is an informal working method that has emerged for certain laws that are complex or broad in scope, where positions need to be discussed off the record more often than usual to speed up the work. But because these off-the-record consultations can be held without being disclosed to the public [since the process is informal], they have been criticized as risking a lack of transparency. The European Parliament later issued Parliament Rules of Procedure to keep the process transparent and reflective of its role as the representative of European citizens.)
As you can see, the legislative procedure must be carried out jointly by the Parliament and the Council, as representatives of the Union and of the member states. If you look at the official names of EU laws, you will see they are titled (Regulation/Directive) "of the European Parliament and of the Council", i.e., the two always come as a pair.
As for the recommendation type, member states may choose whether or not to follow it. Many recommendations come from the Commission, but other institutions such as the Parliament, the Council, and the European Central Bank can issue recommendations as well.
Finally, there is the opinion type. As the name suggests, it is not exactly a recommendation of what should or should not be done and how (like a recommendation), but an expression of views on a particular matter. Since it is not really legislative in nature, the main institutions, the Commission, the Council, and the Parliament, can each issue opinions on their own (without another institution's involvement), and the Committee of the Regions and the European Economic and Social Committee can issue opinions as well.
This draft is a "provisional position" (temporary/interim) from the Council of the European Union, issued before the Council's actual first reading takes place; this draft is nicknamed the "General approach".
The term generative AI appears in two places in the Council's draft: Recital 6 and Article 3(1).
At present, the EU's draft AI Act has just passed the Parliament's first reading.
Next comes the Council's first reading; if the Council accepts the whole draft without any amendments, the law can be enacted immediately.
As you can see, even though the Council's official first reading has not yet taken place, the Council already has a provisional position, which has influenced the Parliament's position (pulling the Parliament's position closer to the Council's) and is useful for the informal three-way talks, which may mean the process never needs to reach the Conciliation stage.
My programs have defects because some functions destroy or mutate shared data. I avoid mutating shared data in Common Lisp; for example, I use CONS. However, I made a mistake by using SORT, because I wasn't aware that SORT is destructive. Sometimes I still forget that SORT mutates its input data. The function sort-snode below sorts a part of a tree.
Since it is based on SORT, it changes its input data, snode. After I found that the output tree didn't look like what I expected, it took me days to find my mistake.
My workaround is running COPY-LIST before SORT, as shown below.
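Since the fixed sort-snode itself is not reproduced in this text, here is a minimal sketch of the pattern with made-up data:

(defparameter *snode* (list 3 1 2))

;; Destructive: (sort *snode* #'<) may rearrange *snode*'s cells in place.
;; Workaround: sort a copy, leaving the shared list untouched.
(defparameter *sorted* (sort (copy-list *snode*) #'<))

*snode*   ; => (3 1 2), still intact
*sorted*  ; => (1 2 3)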
Submitted to the AI Incident Database on 25 October 2022 (my first time!). Based on a report by Thai PBS on 4 October 2019, with information from additional sources. Appeared in the database on 26 October 2022. I will keep the extended report here for archival purposes. For a concise report and citation, please link to Incident 375 in the AI Incident Database.
—
Many Thais could not register for the government cash handout scheme because the app managing the government wallet failed to recognize their faces during the authentication process. People entitled to the handout had to wait in very long queues at their local ATMs to get authenticated instead.
The handout is limited to 10 million recipients in the first round, and a recipient has to register to claim it. Thailand has a population of almost 70 million. Registration involves an authentication step of photographing the citizen identification card and the face of the card holder. If this does not succeed, the citizen can do it at a supported ATM. About 3,000 ATMs nationwide support the process.
On social media, internet users shared the problems they had, screenshots of messages from the app, and photo-taking techniques that might please the facial recognition. The tips included applying face powder makeup, putting the hair up, taking off eyeglasses, taking the photo during the daytime in sunlight, looking straight at the camera, making the face and hair match the photo on the ID card, and avoiding shadows on the ID card.
Elderly people are among the groups that suffer the most from the facial recognition issue, as their current faces can differ more from the photos on their ID cards. Under Thai citizen ID card law, people who are 70 years old or more no longer need to renew their cards, which normally must be renewed every eight years. Because of this, the face on the ID card and the actual face can be very different.
Background
The cash handout program, called “Chim, Shop, Chai” (ชิมช้อปใช้, roughly translated as “eat, buy, spend”), is aimed at promoting domestic tourism.
The same government wallet, “G Wallet”, was later used for many other rounds of cash handout and co-pay programs during the COVID-19 pandemic, such as “Khon La Khrueng” (คนละครึ่ง, “each pays half”), where the same facial recognition issue still occurred.
The total number of individuals registered for these financial support programs is around 26.5 million. The wallet itself is inside the “Paotang” (เป๋าตัง) super app, developed and managed by a state enterprise, Krungthai Bank. Paotang had 34 million active users as of June 2022.
Suriyawongkul, Arthit. (2019-09-29) Incident Number 375. in Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative. Retrieved on October 26, 2022 from incidentdatabase.ai/cite/375.
AI for All, a program of artificial intelligence/robotics projects for everyone, supported by the Office of the National Higher Education, Science, Research and Innovation Policy Council (สอวช., NXPO), has a section of articles and monthly news summaries, some of which cover concerns and regulatory frameworks around deploying artificial intelligence systems in society.
Many teams don't test their functions separately. They run the whole project to see the result. When something goes wrong, they check the log and use a debugger. They are the majority, at least in my experience. Static type checking and type annotations are efficient for these teams because type annotations give a rough idea about the data each function handles. They can't look at test data, because it doesn't exist.
Still, I wonder if forcing type annotations is practical. Reading a long function is difficult. When splitting a long function, I found that type annotations can be distracting because, instead of focusing on logic, I have to think about the annotations; sometimes the annotations are about half the size of the function. Maybe forcing type annotations only on functions that an outsider from another module can call, as in OCaml, is practical. I haven't coded in OCaml beyond some toy programs, so I don't know whether it is really as practical as I imagine.
My computer has many cores but not enough RAM to hold all of the data. So I usually need to process data from a stream in parallel, for example, reading Thai text line by line and dispatching the lines to processors running on each core.
In Clojure, I can use pmap because pmap works on a lazy sequence. However, using lparallel's pmap with lazy sequences in Common Lisp wasn't covered in the examples. So I used Bordeaux threads and ChanL channels instead. It worked. Still, I had to write repetitive code to manage threads and channels whenever I wanted to process a stream in parallel. It not only looked messy, but also came with many bugs.
So I created a small library, called stream-par-procs, to wrap thread and channel management. As shown in the diagram, the reader reads a line from the stream, the system dispatches each line to different processors, and finally the collector creates the final result.
I only need to define a function to run in each processor, another function to run in the collector, and other functions for initializing their states. It hides details such as joining the collector thread when every processor has sent END-OF-STREAM.
In brief, stream-par-procs makes processing a stream in parallel in Common Lisp more convenient, and hopefully less buggy, by reusing the thread and channel management code.
Fetching data from REST APIs or a plain HTTP server is usually a part of my work. Many problems may arise during fetching data; for example, a server goes offline, my storage is out of space, an API gives the wrong URL, a server sends data in an invalid format, etc. To illustrate the situation, I can oversimplify my task to the function below:
def fetch(uri):
    requests.get(uri)
A fetcher usually faces one of these problems. I don't want to rerun a fetcher after it has already been running for an hour or a day. Maybe I can write my fetcher to resume from whatever state it stopped in, but that would take time and make my fetcher more complex. If I code in Python, the fetch function can raise many kinds of exceptions, and my fetcher will stop because my code doesn't handle any of them.
I see two types of environments that will help me fix the problem.
(1) A type checker may help me handle as many types of exceptions as possible up front. Java can definitely do this because, in Java, I have to declare the list of exceptions as part of the fetch function, and javac checks whether the caller handles these exceptions. Rust doesn't use Java-like exceptions, but the Result type encodes the kinds of errors, so the Rust compiler can check whether a caller handles them.
(2) The Common Lisp runtime will start a debugger when an unhandled error occurs, so I can choose what to do next. For example, I can empty my trash folder if my storage runs out of space. I still don't know whether I could handle the storage space problem later with the Erlang runtime.
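As an illustration of "choosing what to do next", here is a sketch of how I could wrap the fetch in restarts in Common Lisp (HTTP-GET is a placeholder for whatever HTTP client function is actually used): when an error reaches the debugger, RETRY-FETCH and SKIP-URI show up as options alongside the usual ones.

(defun fetch (uri)
  (restart-case
      (http-get uri)               ; placeholder for the real HTTP call
    (retry-fetch ()
      :report "Try fetching this URI again."
      (fetch uri))
    (skip-uri ()
      :report "Skip this URI and return NIL."
      nil)))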
Python doesn't seem to belong to either (1) or (2). I tried mypy, a static type checker for Python, but it didn't warn me about anything. And I don't know how to retry or do something else with Python's exception system other than letting it stop.