Mirror of https://github.com/krahets/hello-algo.git
Commit c458348df2 (parent f7cbcbbd75): build
@@ -417,7 +417,7 @@ $$

 <p align="center"> Figure 14-14 Recursion tree of brute-force search </p>

-Each state has two choices, down and right. Walking from the top-left corner to the bottom-right corner takes $m + n - 2$ steps in total, so the worst-case time complexity is $O(2^{m + n})$. Please note that this way of counting does not account for cells near the grid boundary: once the grid boundary is reached, only one choice remains, so the actual number of paths is somewhat smaller.
+Each state has two choices, down and right. Walking from the top-left corner to the bottom-right corner takes $m + n - 2$ steps in total, so the worst-case time complexity is $O(2^{m + n})$, where $n$ and $m$ are the number of rows and columns of the grid, respectively. Please note that this way of counting does not account for cells near the grid boundary: once the grid boundary is reached, only one choice remains, so the actual number of paths is somewhat smaller.

 ### 2. Method 2: Memoized search
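The follow-up section covers memoized search. As a rough illustration of the idea (a sketch under assumed names such as `min_path_sum` and `grid`, not the book's own listing), caching each state $(i, j)$ collapses the exponential recursion tree to $O(m \times n)$ distinct states:

```python
# Illustrative memoized-search sketch for a grid walked only down or right.
# The function and variable names are assumptions for this example.
from functools import lru_cache

INF = 2**31 - 1  # sentinel for "stepped outside the grid"

def min_path_sum(grid: list[list[int]]) -> int:
    m, n = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def dfs(i: int, j: int) -> int:
        if i == m - 1 and j == n - 1:   # bottom-right corner reached
            return grid[i][j]
        if i >= m or j >= n:            # outside the grid
            return INF
        # Each state (i, j) is solved once and cached, so there are O(m * n) states
        return grid[i][j] + min(dfs(i + 1, j), dfs(i, j + 1))

    return dfs(0, 0)

print(min_path_sum([[1, 3, 1], [1, 5, 1], [4, 2, 1]]))  # 7
```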
@@ -6,7 +6,7 @@ comments: true

 !!! tip

-    Before reading this section, please make sure you have completed the “Heap“ chapter.
+    Before reading this section, please make sure you have completed the “Heap” chapter.

 <u>Heap sort</u> is an efficient sorting algorithm built on the heap data structure. We can implement heap sort using the "build heap" and "pop the top element" operations that we have already learned.
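For reference, a minimal heap sort sketch in Python built on those two operations (illustrative only, not the book's listing; `sift_down` and `heap_sort` are assumed names):

```python
def sift_down(nums: list[int], n: int, i: int):
    """Sift element i down within nums[:n] to restore the max-heap property."""
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < n and nums[l] > nums[largest]:
            largest = l
        if r < n and nums[r] > nums[largest]:
            largest = r
        if largest == i:
            break
        nums[i], nums[largest] = nums[largest], nums[i]
        i = largest

def heap_sort(nums: list[int]):
    """Sort nums in ascending order in place."""
    # Build a max-heap: heapify all non-leaf nodes from bottom to top
    for i in range(len(nums) // 2 - 1, -1, -1):
        sift_down(nums, len(nums), i)
    # Repeatedly move the heap top (the maximum) to the end of the array
    for i in range(len(nums) - 1, 0, -1):
        nums[0], nums[i] = nums[i], nums[0]
        sift_down(nums, i, 0)

nums = [4, 1, 3, 1, 5, 2]
heap_sort(nums)
print(nums)  # [1, 1, 2, 3, 4, 5]
```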
@@ -501,7 +501,7 @@ comments: true
             val: int = self._front.val  # Temporarily store the value of the head node
             # Remove the head node
             fnext: ListNode | None = self._front.next
-            if fnext != None:
+            if fnext is not None:
                 fnext.prev = None
                 self._front.next = None
             self._front = fnext  # Update the head node
@@ -510,7 +510,7 @@ comments: true
             val: int = self._rear.val  # Temporarily store the value of the tail node
             # Remove the tail node
             rprev: ListNode | None = self._rear.prev
-            if rprev != None:
+            if rprev is not None:
                 rprev.next = None
                 self._rear.prev = None
             self._rear = rprev  # Update the tail node
@@ -504,6 +504,8 @@ comments: true
     P->left = n2;
     // Remove node P
     n1->left = n2;
+    // Free memory
+    delete P;
     ```

 === "Java"
@@ -609,6 +611,8 @@ comments: true
     P->left = n2;
     // Remove node P
     n1->left = n2;
+    // Free memory
+    free(P);
     ```

 === "Kotlin"
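For comparison, the same insertion and removal of node P in a garbage-collected language needs no explicit `delete`/`free`. A hedged Python sketch (the minimal `TreeNode` class here is a stand-in, not the library code touched by this commit):

```python
class TreeNode:
    """Minimal stand-in binary tree node."""
    def __init__(self, val: int = 0):
        self.val = val
        self.left = None
        self.right = None

n1, n2 = TreeNode(1), TreeNode(2)
n1.left = n2
# Insert node P between n1 and n2
p = TreeNode(0)
n1.left = p
p.left = n2
# Remove node P; no explicit free is needed, the garbage collector reclaims it
n1.left = n2
```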
@@ -6,10 +6,10 @@ comments: true

 In algorithm design, we pursue the following two objectives in sequence.

-1. **Finding a Solution to the Problem**: The algorithm should reliably find the correct solution within the stipulated range of inputs.
+1. **Finding a Solution to the Problem**: The algorithm should reliably find the correct solution within the specified range of inputs.
 2. **Seeking the Optimal Solution**: For the same problem, multiple solutions might exist, and we aim to find the most efficient algorithm possible.

-In other words, under the premise of being able to solve the problem, algorithm efficiency has become the main criterion for evaluating the merits of an algorithm, which includes the following two dimensions.
+In other words, under the premise of being able to solve the problem, algorithm efficiency has become the main criterion for evaluating an algorithm, which includes the following two dimensions.

 - **Time efficiency**: The speed at which an algorithm runs.
 - **Space efficiency**: The size of the memory space occupied by an algorithm.
@@ -20,11 +20,11 @@ There are mainly two methods of efficiency assessment: actual testing and theoretical estimation.

 ## 2.1.1 Actual testing

-Suppose we have algorithms `A` and `B`, both capable of solving the same problem, and we need to compare their efficiencies. The most direct method is to use a computer to run these two algorithms and monitor and record their runtime and memory usage. This assessment method reflects the actual situation but has significant limitations.
+Suppose we have algorithms `A` and `B`, both capable of solving the same problem, and we need to compare their efficiencies. The most direct method is to use a computer to run these two algorithms, monitor and record their runtime and memory usage. This assessment method reflects the actual situation, but it has significant limitations.

-On one hand, **it's difficult to eliminate interference from the testing environment**. Hardware configurations can affect algorithm performance. For example, algorithm `A` might run faster than `B` on one computer, but the opposite result may occur on another computer with different configurations. This means we would need to test on a variety of machines to calculate average efficiency, which is impractical.
+On one hand, **it's difficult to eliminate interference from the testing environment**. Hardware configurations can affect algorithm performance. For example, an algorithm with a high degree of parallelism is better suited for running on multi-core CPUs, while an algorithm that involves intensive memory operations performs better with high-performance memory. The test results of an algorithm may vary across different machines. This means testing across multiple machines to calculate average efficiency becomes impractical.

-On the other hand, **conducting a full test is very resource-intensive**. As the volume of input data changes, the efficiency of the algorithms may vary. For example, with smaller data volumes, algorithm `A` might run faster than `B`, but the opposite might be true with larger data volumes. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires significant computational resources.
+On the other hand, **conducting a full test is very resource-intensive**. Algorithm efficiency varies with input data size. For example, with smaller data volumes, algorithm `A` might run faster than `B`, but with larger data volumes, the test results may be the opposite. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires excessive computational resources.
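As a concrete illustration of actual testing (a sketch, not part of the source file; `algorithm_a` and `algorithm_b` are stand-ins), one can time two candidate implementations directly on the same input:

```python
import time

def algorithm_a(data: list[int]) -> int:
    return sum(data)                  # single pass over the data, O(n)

def algorithm_b(data: list[int]) -> int:
    total = 0
    for i in range(len(data)):        # also O(n), but with extra indexing work
        total += data[i]
    return total

data = list(range(1_000_000))
for algo in (algorithm_a, algorithm_b):
    start = time.perf_counter()
    algo(data)
    print(f"{algo.__name__}: {time.perf_counter() - start:.4f} s")
```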

 ## 2.1.2 Theoretical estimation
@@ -34,19 +34,20 @@ Complexity analysis reflects the relationship between the time and space resources

 - "Time and space resources" correspond to <u>time complexity</u> and <u>space complexity</u>, respectively.
 - "As the size of input data increases" means that complexity reflects the relationship between algorithm efficiency and the volume of input data.
-- "The trend of growth in time and space" indicates that complexity analysis focuses not on the specific values of runtime or space occupied but on the "rate" at which time or space grows.
+- "The trend of growth in time and space" indicates that complexity analysis focuses not on the specific values of runtime or space occupied, but on the "rate" at which time or space increases.

 **Complexity analysis overcomes the disadvantages of actual testing methods**, reflected in the following aspects:

 - It does not require actually running the code, making it more environmentally friendly and energy efficient.
 - It is independent of the testing environment and applicable to all operating platforms.
 - It can reflect algorithm efficiency under different data volumes, especially in the performance of algorithms with large data volumes.

 !!! tip

-    If you're still confused about the concept of complexity, don't worry. We will introduce it in detail in subsequent chapters.
+    If you're still confused about the concept of complexity, don't worry. We will cover it in detail in subsequent chapters.

-Complexity analysis provides us with a "ruler" to measure the time and space resources needed to execute an algorithm and compare the efficiency between different algorithms.
+Complexity analysis provides us with a "ruler" to evaluate the efficiency of an algorithm, enabling us to measure the time and space resources required to execute it and compare the efficiency of different algorithms.

-Complexity is a mathematical concept and may be abstract and challenging for beginners. From this perspective, complexity analysis might not be the best content to introduce first. However, when discussing the characteristics of a particular data structure or algorithm, it's hard to avoid analyzing its speed and space usage.
+Complexity is a mathematical concept that might be abstract and challenging for beginners. From this perspective, complexity analysis might not be the most suitable topic to introduce first. However, when discussing the characteristics of a particular data structure or algorithm, it's hard to avoid analyzing its speed and space usage.

-In summary, it's recommended that you establish a preliminary understanding of complexity analysis before diving deep into data structures and algorithms, **so that you can carry out simple complexity analyses of algorithms**.
+In summary, it is recommended to develop a basic understanding of complexity analysis before diving deep into data structures and algorithms, **so that you can perform complexity analysis on simple algorithms**.
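To make "the rate at which time grows" concrete, a small illustrative sketch (not from the source file; function names are assumptions) that counts basic operations for inputs of increasing size:

```python
# Complexity analysis cares about how the operation count grows with n,
# not about concrete running times on a particular machine.
def constant_ops(n: int) -> int:
    count = 0
    for _ in range(10):        # fixed number of iterations: O(1)
        count += 1
    return count

def linear_ops(n: int) -> int:
    count = 0
    for _ in range(n):         # iterations grow in step with n: O(n)
        count += 1
    return count

def quadratic_ops(n: int) -> int:
    count = 0
    for _ in range(n):
        for _ in range(n):     # nested loops: O(n^2)
            count += 1
    return count

for n in (10, 100, 1000):
    print(n, constant_ops(n), linear_ops(n), quadratic_ops(n))
```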
@@ -4,29 +4,29 @@ comments: true

 # 10.5 Search algorithms revisited

-<u>Searching algorithms (searching algorithm)</u> are used to search for one or several elements that meet specific criteria in data structures such as arrays, linked lists, trees, or graphs.
+<u>Searching algorithms (search algorithms)</u> are used to retrieve one or more elements that meet specific criteria within data structures such as arrays, linked lists, trees, or graphs.

-Searching algorithms can be divided into the following two categories based on their implementation approaches.
+Searching algorithms can be divided into the following two categories based on their approach.

 - **Locating the target element by traversing the data structure**, such as traversals of arrays, linked lists, trees, and graphs, etc.
-- **Using the organizational structure of the data or the prior information contained in the data to achieve efficient element search**, such as binary search, hash search, and binary search tree search, etc.
+- **Using the organizational structure of the data or existing data to achieve efficient element searches**, such as binary search, hash search, binary search tree search, etc.

-It is not difficult to notice that these topics have been introduced in previous chapters, so searching algorithms are not unfamiliar to us. In this section, we will revisit searching algorithms from a more systematic perspective.
+These topics were introduced in previous chapters, so they are not unfamiliar to us. In this section, we will revisit searching algorithms from a more systematic perspective.

 ## 10.5.1 Brute-force search

-Brute-force search locates the target element by traversing every element of the data structure.
+A brute-force search locates the target element by traversing every element of the data structure.

-- "Linear search" is suitable for linear data structures such as arrays and linked lists. It starts from one end of the data structure, accesses each element one by one, until the target element is found or the other end is reached without finding the target element.
-- "Breadth-first search" and "Depth-first search" are two traversal strategies for graphs and trees. Breadth-first search starts from the initial node and searches layer by layer, accessing nodes from near to far. Depth-first search starts from the initial node, follows a path until the end, then backtracks and tries other paths until the entire data structure is traversed.
+- "Linear search" is suitable for linear data structures such as arrays and linked lists. It starts from one end of the data structure and accesses each element one by one until the target element is found or the other end is reached without finding the target element.
+- "Breadth-first search" and "Depth-first search" are two traversal strategies for graphs and trees. Breadth-first search starts from the initial node and searches layer by layer (left to right), accessing nodes from near to far. Depth-first search starts from the initial node, follows a path until the end (top to bottom), then backtracks and tries other paths until the entire data structure is traversed.

-The advantage of brute-force search is its simplicity and versatility, **no need for data preprocessing and the help of additional data structures**.
+The advantage of brute-force search is its simplicity and versatility, **no need for data preprocessing or the help of additional data structures**.

-However, **the time complexity of this type of algorithm is $O(n)$**, where $n$ is the number of elements, so the performance is poor in cases of large data volumes.
+However, **the time complexity of this type of algorithm is $O(n)$**, where $n$ is the number of elements, so the performance is poor with large data sets.

 ## 10.5.2 Adaptive search

-Adaptive search uses the unique properties of data (such as order) to optimize the search process, thereby locating the target element more efficiently.
+An adaptive search uses the unique properties of data (such as order) to optimize the search process, thereby locating the target element more efficiently.

 - "Binary search" uses the orderliness of data to achieve efficient searching, only suitable for arrays.
 - "Hash search" uses a hash table to establish a key-value mapping between search data and target data, thus implementing the query operation.
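A minimal sketch of the brute-force (linear) search described above (illustrative; the function name is an assumption):

```python
# Linear search: scan each element one by one until the target is found.
def linear_search(nums: list[int], target: int) -> int:
    """Return the index of target in nums, or -1 if it is absent."""
    for i, num in enumerate(nums):
        if num == target:
            return i
    return -1

print(linear_search([4, 1, 3, 1, 5], 3))  # 2
```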
@@ -34,7 +34,7 @@ Adaptive search uses the unique properties of data (such as order) to optimize the search process, thereby locating the target element more efficiently.

 The advantage of these algorithms is high efficiency, **with time complexities reaching $O(\log n)$ or even $O(1)$**.

-However, **using these algorithms often requires data preprocessing**. For example, binary search requires sorting the array in advance, and hash search and tree search both require the help of additional data structures, maintaining these structures also requires extra time and space overhead.
+However, **using these algorithms often requires data preprocessing**. For example, binary search requires sorting the array in advance, and hash search and tree search both require the help of additional data structures. Maintaining these structures also requires more overhead in terms of time and space.

 !!! tip
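As a concrete sketch of the binary search just mentioned (illustrative; it assumes the array has already been sorted in ascending order):

```python
def binary_search(nums: list[int], target: int) -> int:
    """Return the index of target in sorted nums, or -1 if it is absent."""
    i, j = 0, len(nums) - 1
    while i <= j:
        m = (i + j) // 2          # midpoint of the current interval [i, j]
        if nums[m] < target:
            i = m + 1             # target lies in the right half
        elif nums[m] > target:
            j = m - 1             # target lies in the left half
        else:
            return m
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```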
@@ -42,13 +42,13 @@ However, **using these algorithms often requires data preprocessing**. For example

 ## 10.5.3 Choosing a search method

-Given a set of data of size $n$, we can use linear search, binary search, tree search, hash search, and other methods to search for the target element from it. The working principles of these methods are shown in Figure 10-11.
+Given a set of data of size $n$, we can use a linear search, binary search, tree search, hash search, or other methods to retrieve the target element. The working principles of these methods are shown in Figure 10-11.

 ![Various search strategies](searching_algorithm_revisited.assets/searching_algorithms.png){ class="animation-figure" }

 <p align="center"> Figure 10-11 Various search strategies </p>

-The operation efficiency and characteristics of the aforementioned methods are shown in the following table.
+The characteristics and operational efficiency of the aforementioned methods are shown in the following table.

 <p align="center"> Table 10-1 Comparison of search algorithm efficiency </p>
@@ -65,23 +65,23 @@ The operation efficiency and characteristics of the aforementioned methods are shown in the following table.

 </div>

-The choice of search algorithm also depends on the volume of data, search performance requirements, data query and update frequency, etc.
+The choice of search algorithm also depends on the volume of data, search performance requirements, frequency of data queries and updates, etc.

 **Linear search**

-- Good versatility, no need for any data preprocessing operations. If we only need to query the data once, then the time for data preprocessing in the other three methods would be longer than the time for linear search.
+- Good versatility, no need for any data preprocessing operations. If we only need to query the data once, then the time for data preprocessing in the other three methods would be longer than the time for a linear search.
 - Suitable for small volumes of data, where time complexity has a smaller impact on efficiency.
-- Suitable for scenarios with high data update frequency, because this method does not require any additional maintenance of the data.
+- Suitable for scenarios with very frequent data updates, because this method does not require any additional maintenance of the data.

 **Binary search**

-- Suitable for large data volumes, with stable efficiency performance, the worst time complexity being $O(\log n)$.
-- The data volume cannot be too large, because storing arrays requires contiguous memory space.
-- Not suitable for scenarios with frequent additions and deletions, because maintaining an ordered array incurs high overhead.
+- Suitable for larger data volumes, with stable performance and a worst-case time complexity of $O(\log n)$.
+- However, the data volume cannot be too large, because storing arrays requires contiguous memory space.
+- Not suitable for scenarios with frequent additions and deletions, because maintaining an ordered array incurs a lot of overhead.

 **Hash search**

-- Suitable for scenarios with high query performance requirements, with an average time complexity of $O(1)$.
+- Suitable for scenarios where fast query performance is essential, with an average time complexity of $O(1)$.
 - Not suitable for scenarios needing ordered data or range searches, because hash tables cannot maintain data orderliness.
 - High dependency on hash functions and hash collision handling strategies, with significant performance degradation risks.
 - Not suitable for overly large data volumes, because hash tables need extra space to minimize collisions and provide good query performance.
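For illustration, a hash search in Python typically relies on the built-in `dict`, giving average $O(1)$ lookups (a sketch, not from the source file):

```python
# Hash search sketch: build a value -> index map once, then each query
# is an average O(1) dictionary lookup.
nums = [4, 1, 3, 1, 5]
index_of = {num: i for i, num in enumerate(nums)}  # later duplicates overwrite earlier ones

print(index_of.get(3, -1))   # 2
print(index_of.get(9, -1))   # -1 (not found)
```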
@@ -90,5 +90,5 @@ The choice of search algorithm also depends on the volume of data, search performance requirements, data query and update frequency, etc.

 - Suitable for massive data, because tree nodes are stored scattered in memory.
 - Suitable for maintaining ordered data or range searches.
-- In the continuous addition and deletion of nodes, the binary search tree may become skewed, degrading the time complexity to $O(n)$.
+- With the continuous addition and deletion of nodes, the binary search tree may become skewed, degrading the time complexity to $O(n)$.
 - If using AVL trees or red-black trees, operations can run stably at $O(\log n)$ efficiency, but the operation to maintain tree balance adds extra overhead.
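A minimal binary search tree lookup sketch (illustrative; the `TreeNode` class and `bst_search` name are assumptions, and the tree is assumed to satisfy the BST property):

```python
class TreeNode:
    """Minimal stand-in BST node."""
    def __init__(self, val: int):
        self.val = val
        self.left: "TreeNode | None" = None
        self.right: "TreeNode | None" = None

def bst_search(root: "TreeNode | None", target: int) -> "TreeNode | None":
    """Walk down the tree, branching by comparison; O(log n) if the tree is balanced."""
    cur = root
    while cur is not None:
        if target < cur.val:
            cur = cur.left
        elif target > cur.val:
            cur = cur.right
        else:
            return cur
    return None

root = TreeNode(5)
root.left = TreeNode(3)
root.right = TreeNode(8)
node = bst_search(root, 8)
print(node.val if node else -1)  # 8
```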
@@ -447,7 +447,7 @@ The implementation code is as follows:
             val: int = self._front.val  # Temporarily store the head node value
             # Remove head node
             fnext: ListNode | None = self._front.next
-            if fnext != None:
+            if fnext is not None:
                 fnext.prev = None
                 self._front.next = None
             self._front = fnext  # Update head node
@@ -456,7 +456,7 @@ The implementation code is as follows:
             val: int = self._rear.val  # Temporarily store the tail node value
             # Remove tail node
             rprev: ListNode | None = self._rear.prev
-            if rprev != None:
+            if rprev is not None:
                 rprev.next = None
                 self._rear.prev = None
             self._rear = rprev  # Update tail node
@@ -417,7 +417,7 @@ $$

 <p align="center"> Figure 14-14 Recursion tree of brute-force search </p>

-Each state has two choices, down and right. Walking from the top-left corner to the bottom-right corner takes $m + n - 2$ steps in total, so the worst-case time complexity is $O(2^{m + n})$. Please note that this way of counting does not account for cells near the grid boundary: once the grid boundary is reached, only one choice remains, so the actual number of paths is somewhat smaller.
+Each state has two choices, down and right. Walking from the top-left corner to the bottom-right corner takes $m + n - 2$ steps in total, so the worst-case time complexity is $O(2^{m + n})$, where $n$ and $m$ are the number of rows and columns of the grid, respectively. Please note that this way of counting does not account for cells near the grid boundary: once the grid boundary is reached, only one choice remains, so the actual number of paths is somewhat smaller.

 ### 2. Method 2: Memoized search
@@ -6,7 +6,7 @@ comments: true

 !!! tip

-    Before reading this section, please make sure you have completed the “Heap“ chapter.
+    Before reading this section, please make sure you have completed the “Heap” chapter.

 <u>Heap sort</u> is an efficient sorting algorithm built on the heap data structure. We can implement heap sort using the "build heap" and "pop the top element" operations that we have already learned.
@@ -501,7 +501,7 @@ comments: true
             val: int = self._front.val  # Temporarily store the value of the head node
             # Remove the head node
             fnext: ListNode | None = self._front.next
-            if fnext != None:
+            if fnext is not None:
                 fnext.prev = None
                 self._front.next = None
             self._front = fnext  # Update the head node
@@ -510,7 +510,7 @@ comments: true
             val: int = self._rear.val  # Temporarily store the value of the tail node
             # Remove the tail node
             rprev: ListNode | None = self._rear.prev
-            if rprev != None:
+            if rprev is not None:
                 rprev.next = None
                 self._rear.prev = None
             self._rear = rprev  # Update the tail node
@@ -504,6 +504,8 @@ comments: true
     P->left = n2;
     // Remove node P
     n1->left = n2;
+    // Free memory
+    delete P;
     ```

 === "Java"
@@ -609,6 +611,8 @@ comments: true
     P->left = n2;
     // Remove node P
     n1->left = n2;
+    // Free memory
+    free(P);
     ```

 === "Kotlin"