The progress of space missions has driven the development of artificial intelligence technology in the aerospace field. In particular, deploying intelligent algorithms on spacecraft to enhance their onboard intelligence has become the current trend. However, spacecraft currently lack sufficient computing power for intelligent algorithms, which severely restricts the development of aerospace intelligence. This article analyzes the computing power demands of typical aerospace intelligent applications, reviews the current research status in the field of aerospace intelligent computing, and summarizes the state of intelligent computing technology for aerospace intelligent applications. On this basis, the paper argues that the future development of aerospace intelligent computing technology should follow the path of serialized computing chips, generalized computing platforms, and unified supporting software, creating a product form equipped with intelligent computing platforms and a complete foundational ecosystem for aerospace intelligence. The key technical issues to be solved and the development strategy proposed in this paper are of great significance for seizing the opportunity to promote a new round of the space industry revolution.
Click-through rate (CTR) prediction based on feature-interaction modeling has been widely explored and has made great progress. It can alleviate the loss of effective information, but it depends to a certain extent on the co-occurrence of different features and suffers from feature sparsity. Therefore, to address the problem that feature representations cannot be learned efficiently when interaction features occur only rarely, a click-through rate prediction model, LGCDFM (LightGCN with DeepFM), based on LightGCN-enhanced embedding-layer learning is proposed. In the initial embedding layer, a divide-and-conquer learning strategy is adopted: different types of nodes are distinguished in the graph structure, information is first propagated among nodes of the same type to ensure feature frequency, and then information from multi-hop neighbors is captured through interactions between high-order connected nodes of different types. The LightGCN structure has powerful feature-extraction and representation-learning capabilities, and it discards the feature transformations and nonlinear activation functions that are not conducive to interaction, which makes it well suited to collaborative-filtering tasks over simple user-item interaction data and effectively reduces the feature-sparsity problem. Finally, the representation-learning layer applies DeepFM, a classic click-through rate prediction model, to learn high-order and low-order feature combinations end to end, using latent vectors to learn from sparse data and improve the performance of click-through rate prediction. Experiments on the two public datasets Criteo and Avazu show that this model outperforms existing methods on click-through rate prediction and the feature-sparsity problem.
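The LightGCN propagation that the embedding-enhancement step relies on keeps only symmetrically normalized neighbor aggregation, with no feature transformation or nonlinearity. A minimal illustrative sketch (the toy bipartite graph and embeddings are hypothetical, not from the paper):

```python
# One LightGCN propagation step: e_u^(k+1) = sum_{i in N(u)} e_i^(k) / sqrt(|N(u)|*|N(i)|)
# LightGCN deliberately omits weight matrices and activation functions.
import math

def lightgcn_step(emb, adj):
    dim = len(next(iter(emb.values())))
    new = {}
    for u, neighbors in adj.items():
        vec = [0.0] * dim
        for i in neighbors:
            norm = math.sqrt(len(adj[u]) * len(adj[i]))  # symmetric normalization
            vec = [a + b / norm for a, b in zip(vec, emb[i])]
        new[u] = vec
    return new

# Hypothetical toy user-item graph: users u1, u2 and items i1, i2
adj = {"u1": ["i1", "i2"], "u2": ["i1"], "i1": ["u1", "u2"], "i2": ["u1"]}
emb = {"u1": [1.0, 0.0], "u2": [0.0, 1.0], "i1": [1.0, 1.0], "i2": [0.5, 0.5]}
out = lightgcn_step(emb, adj)
```

Stacking several such steps is what lets a rarely seen feature node inherit signal from its multi-hop neighborhood.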
Relation extraction is an important part of information extraction technology, which aims to extract relationships between entities from unstructured text. Entity relation extraction based on deep learning has achieved certain results, but its feature extraction is not comprehensive enough, and there is still considerable room for improvement on various experimental metrics. Entity relation extraction differs from tasks such as natural language classification and entity recognition: it depends mainly on the sentence and on information about the two target entities. Based on these characteristics, this paper proposes the SEF-BERT model (Fusion Sentence-Entity Features and BERT Model). The model builds on the pre-trained BERT model; after pre-training with BERT, sentence features and entity features are further extracted and then fused, so that the fused feature vector carries the features of the sentence and of both entities at the same time, which enhances the model's ability to process feature vectors. Finally, the model was trained and tested on a general-domain dataset and a medical-domain dataset. The experimental results show that, compared with other existing models, the SEF-BERT model performs better on both datasets.
Quickly and accurately discovering hot topics in massive network data plays an important role in monitoring online public opinion. Aiming at the problems that the K-means algorithm is sensitive to the selection of initial center points and has insufficient global search ability, an improved grey wolf optimization K-means algorithm based on Hadoop, IGWO-KM, is proposed. First, the algorithm combines the grey wolf optimization algorithm with K-means, exploiting the grey wolf optimizer's fast convergence and global optimization to search for the best cluster centers, which reduces the instability caused by random selection of initial centers and yields better clustering results. Second, a nonlinear convergence factor is used to improve the grey wolf optimization algorithm and balance its global and local search capabilities. Then, the sine cosine algorithm is introduced and improved to enhance the global search ability of the grey wolf optimizer, improve optimization accuracy and convergence speed, and avoid falling into local optima. After that, a nearest-neighbor space sphere is used to reduce redundant distance calculations in the K-means clustering process and speed up convergence. Finally, a Hadoop cluster processes data in batches to parallelize the algorithm. The experimental results show that the IGWO-KM algorithm has better optimization accuracy and stability: compared with GWO-KM and K-means, it significantly improves Precision, Recall, and F-value, with good convergence speed and scalability.
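The abstract mentions a nonlinear convergence factor for the grey wolf optimizer but does not give its formula; one common quadratic-decay variant is shown here purely for illustration (the paper's exact expression may differ):

```python
# Hypothetical nonlinear convergence factor for GWO.
# The standard GWO decays a linearly from 2 to 0; a nonlinear decay keeps a
# larger early in the run (favoring global search) and shrinks it faster
# near the end (favoring local search).
def convergence_factor(t, t_max, a_init=2.0):
    return a_init * (1 - (t / t_max) ** 2)

vals = [convergence_factor(t, 100) for t in (0, 50, 100)]  # start, middle, end
```

At mid-run this factor is still 1.5 rather than the linear schedule's 1.0, which is the intended bias toward exploration.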
Convolutional neural networks (CNN) and recurrent neural networks (RNN) suffer from loss of key feature information, poor model performance, and weak classification results when text features are sparse. To address these problems, this paper proposes a text classification model based on a multi-channel attention mechanism. First, the vector representation combines character and word embeddings. Then, CNN and BiLSTM are used to extract local features and contextual information of the text, and an attention mechanism weights the output of each channel to highlight the importance of feature words in the context. Finally, the outputs are fused and text category probabilities are computed with softmax. Comparative experiments on the dataset show that the proposed model achieves a better classification effect: compared with single-channel versions of the model, the F1 values improve by 1.44% and 1.16%, respectively, verifying the model's effectiveness for text classification. The proposed model complements the shortcomings of CNN and BiLSTM in feature extraction, effectively alleviating CNN's loss of word-order information and BiLSTM's gradient problems when processing text sequences. The model can effectively integrate local and global text features and highlight key information to obtain a more comprehensive text representation, making it well suited to text classification tasks.
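The per-channel attention weighting described above amounts to softmax-normalizing attention scores and using them to pool the channel's per-step outputs; a minimal sketch with hypothetical toy vectors (not the paper's architecture):

```python
# Attention pooling: weight each time-step vector by its softmaxed score.
import math

def softmax(xs):
    m = max(xs)                           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden, scores):
    w = softmax(scores)
    dim = len(hidden[0])
    return [sum(w[t] * hidden[t][d] for t in range(len(hidden)))
            for d in range(dim)]

# Two hypothetical time-step outputs; equal scores reduce to a plain average.
h = [[1.0, 0.0], [0.0, 1.0]]
pooled = attention_pool(h, [0.0, 0.0])
```

In the full model one such pooled vector per channel would be fused before the final softmax classifier.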
Strongly nonlinear constrained optimization is an extremely difficult topic with broad engineering background in the field of optimization, and exploring effective and efficient optimizers for finding the global optima of such problems remains crucial. Therefore, to cope with the difficulty of solving function optimization problems with strongly nonlinear constraints, this work develops a state-matrix-transition-based improved fly visual evolutionary neural network by integrating the inspiration of population evolution with the information-processing mechanism of the fly visual system. In this model, the input at any moment is a grayscale image that matches a state matrix, where each gray level denotes the objective value of a candidate, the so-called state. An improved fly visual feedforward neural network is designed not only to generate a global learning rate but also to handle the problem constraints effectively, relying on the hierarchical information processing of the visual system; each state is transformed into another by a state-transition strategy with the help of the learning rate and the whale's location-update strategy. Theoretical analyses show that the computational complexity of the visual evolutionary neural network is determined only by the resolution of each input image. Comparative experiments validate that the neural network has major advantages in optimization quality and important reference value for solving engineering optimization problems.
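The whale location-update strategy the state transition borrows is typically the spiral update from the whale optimization algorithm; a hedged sketch of that standard formula (the constant b, the toy positions, and the seeded generator are illustrative, not from the paper):

```python
# Spiral position update from WOA: x' = |best - x| * e^(b*l) * cos(2*pi*l) + best,
# with l drawn uniformly from [-1, 1].
import math
import random

def whale_spiral_update(x, best, b=1.0, rng=random.Random(0)):
    l = rng.uniform(-1, 1)
    return [abs(bi - xi) * math.exp(b * l) * math.cos(2 * math.pi * l) + bi
            for xi, bi in zip(x, best)]

# Hypothetical 2-D state moved toward a hypothetical best-known state.
new = whale_spiral_update([0.0, 0.0], [1.0, 1.0])
```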
Question generation is a widely used natural language generation task. Most existing research uses sequence-to-sequence models based on recurrent neural networks. Due to the long-term dependence problem, the encoder cannot effectively capture relational information between words when modeling sentences. In addition, during decoding, the decoder usually uses only the single-layer or top-layer output of the encoder to compute the global attention weights, which fails to make full use of the syntactic and semantic information in sentences. Aiming at these two defects, a question generation model based on an improved attention mechanism is proposed. The model adds a self-attention mechanism to the encoder to extract relational information between words, and uses the multi-layer outputs of the encoder to jointly compute the global attention weights when the decoder generates question words. The improved model is tested on the SQuAD dataset. The experimental results show that, compared with the baseline model, the improved model obtains better scores under both evaluation methods, and example analysis shows that the natural language questions generated by the improved model are of higher quality.
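The self-attention added to the encoder is presumably standard scaled dot-product attention; an illustrative pure-Python sketch over hypothetical toy matrices (not the paper's model):

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
import math

def scaled_dot_attention(Q, K, V):
    d = len(Q[0])
    scores = [[sum(q[i] * k[i] for i in range(d)) / math.sqrt(d) for k in K]
              for q in Q]

    def softmax(row):
        m = max(row)
        e = [math.exp(x - m) for x in row]
        s = sum(e)
        return [x / s for x in e]

    A = [softmax(row) for row in scores]      # attention weights per query
    return [[sum(A[r][t] * V[t][c] for t in range(len(V)))
             for c in range(len(V[0]))]
            for r in range(len(Q))]

# Two toy token vectors attending over themselves (self-attention: Q = K = V).
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
out = scaled_dot_attention(Q, K, V)
```

Each output row is a convex combination of the value vectors, with more weight on the token most similar to the query, which is the word-relation signal the encoder is missing.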
Depth images play an increasingly important role in areas such as autonomous driving and three-dimensional measurement. Aiming at the difficulty of accurately repairing hole regions in depth images and the slow filling rate, this paper proposes a progressive depth-image hole-filling algorithm based on multi-channel color discrimination. First, filter conditions are set according to the depth image and the color image to accurately screen the pixels in the neighborhood of a hole; the bilateral weights of the neighborhood pixels are then calculated in the spatial domain and the range domain, yielding a two-dimensional weighted filling template. This two-dimensional template is then decomposed into two perpendicular one-dimensional templates to increase the filling speed, and a progressive filling method is used to fill the hole. The experimental results are compared qualitatively and subjectively on public datasets; objectively, the two evaluation metrics of root mean square error and peak signal-to-noise ratio are used to analyze the algorithm's performance precisely. The experimental results show that the proposed method better retains object boundary information, effectively prevents blurring of object edges after filling, produces accurate filling results, and improves the filling rate.
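A bilateral weight combines a spatial-domain Gaussian with a range-domain Gaussian; applied along one axis of the separable template, the fill is a weighted average of valid neighbors. A minimal sketch (the sigma values, the toy scanline, and the use of a guide-color difference are illustrative assumptions):

```python
# One-dimensional pass of a separable bilateral hole fill.
import math

def bilateral_weight(dx, dcolor, sigma_s=3.0, sigma_r=10.0):
    # spatial closeness times color similarity (color from the guide image)
    return (math.exp(-(dx * dx) / (2 * sigma_s ** 2)) *
            math.exp(-(dcolor * dcolor) / (2 * sigma_r ** 2)))

def fill_hole_1d(depths, colors, hole_idx):
    # Weighted average of valid neighbors; None marks missing depth.
    num = den = 0.0
    for i, d in enumerate(depths):
        if d is None or i == hole_idx:
            continue
        w = bilateral_weight(i - hole_idx, colors[i] - colors[hole_idx])
        num += w * d
        den += w
    return num / den

# Toy scanline: a hole between two equal depths on a uniform-color surface.
filled = fill_hole_1d([5.0, None, 5.0], [100, 100, 100], 1)
```

Running one horizontal and one vertical pass of this kind approximates the 2-D template at a fraction of its cost, which is the source of the claimed speedup.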
The coverage problem is the most important issue in the design of Wireless Sensor Networks (WSN); optimizing regional coverage as much as possible is a direct means to improve network sensing performance. In view of this, a node deployment optimization scheme based on a Reformative Sparrow Search Algorithm (RSSA) is proposed. First, in the search phase, RSSA improves the ergodicity of the algorithm by introducing a sine cosine guidance mechanism to replace the location-update mode of the original algorithm. Second, a stagnation disturbance mechanism based on Lévy random step sizes is added, giving RSSA a stronger ability to escape local extrema. At the same time, a more practical probability perception model is used to detect the coverage state of nodes, and better node sets are compared and retained during iterative updates to improve regional coverage. To verify the optimization effect of the reformative algorithm, six groups of general benchmark functions are used to test the performance of RSSA against three different algorithms; the results show that RSSA has good optimization performance. Finally, RSSA is applied to two groups of WSN node deployment optimization examples and compared with coverage optimization algorithms from the literature. Optimizing node deployment with RSSA achieves a maximum coverage of 99.99% and makes the nodes in the region nearly uniformly distributed. While meeting high coverage requirements, fewer nodes are used, which reduces node redundancy and the deployment cost of the overall network system.
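The Lévy random step behind the stagnation disturbance is commonly drawn with Mantegna's algorithm; a sketch under that assumption (the paper's exact formulation, the stability index beta, and the seeded generator are illustrative):

```python
# Mantegna's algorithm for a Levy-stable random step of index beta.
import math
import random

def levy_step(beta=1.5, rng=random.Random(42)):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)   # heavy-tailed numerator
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

steps = [levy_step() for _ in range(1000)]
```

Most steps are small local moves, but the heavy tail occasionally produces a long jump, which is what lets a stagnated sparrow escape a local extremum.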
Among current knowledge graph reasoning methods, representation learning, especially the translation-based TransE series of algorithms, has achieved excellent performance. Most related papers focus on entity reasoning; however, relational reasoning, as a key technology for knowledge graph completion, deserves attention and research. At the same time, in knowledge graphs of ever-expanding scale and increasingly diverse sources of knowledge, relation types are more numerous and complex, and the frequency of any single relation across all triples is further reduced, which increases the difficulty of relational reasoning. Therefore, for multi-relational knowledge graphs, a new relation modeling method based on the TransE model and focused on relational reasoning is proposed, which can alleviate the problem of multiple relations competing for the same vector under multi-mapping attribute relations. It is then combined with other methods to make the new model feasible for entity reasoning as well. Knowledge reasoning experiments on the public FB15k dataset and a Chinese dataset extracted from the web compare the accuracy of relational and entity reasoning with similar methods; good results are achieved, successfully verifying the effectiveness and advancement of the proposed algorithm.
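The TransE base model the proposal builds on scores a triple (h, r, t) by how closely the translation h + r approximates t; a minimal sketch of that standard scoring function (the toy vectors are illustrative):

```python
# TransE score: f(h, r, t) = ||h + r - t||_1 ; lower means a more plausible triple.
def transe_score(h, r, t):
    return sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

good = transe_score([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])  # h + r == t exactly
bad = transe_score([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])   # translation misses t
```

When many relations must share nearly the same translation vector between the same entity pairs, this single-vector scoring is exactly where the competition problem the abstract targets arises.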
With the rise and continuous development of deep learning, significant progress has been made in visual question answering. At present, most visual question answering models introduce attention mechanisms and related iterative operations to extract correlations between image regions and high-frequency question word pairs, but they are inefficient at capturing the spatial semantic association between the image and the question, which affects answer accuracy. For this reason, a visual question answering model based on the MobileNetV3 network and attention feature fusion is proposed. First, to optimize the image feature extraction module, the MobileNetV3 network is introduced and a spatial pyramid pooling structure is added, reducing the computational complexity of the network while maintaining model accuracy. In addition, the output classifier is improved: its feature fusion step is connected using an attention-based feature fusion method to improve answer accuracy. Finally, comparative experiments on the public dataset VQA 2.0 show that the proposed model outperforms current mainstream models.
As the "14th Five-Year Plan" calls for protecting and encouraging more high-value patents, the number of innovative patent applications across disciplines and fields has surged, and the demand for automatic patent classification methods to assist manual classification is increasing. At present, Chinese patents are classified mainly by examiners manually matching the submitted patent content against the International Patent Classification (IPC) table, which is inefficient, while existing automatic classification methods mainly extract text structure features and semantic features from the patents and directly match these features with the classification labels of the IPC table. These methods do not take into account the semantic information of the explanatory text of the classification labels in the IPC table, which easily leads to fuzzy classification. Therefore, this paper transforms the traditional text classification problem into a text matching problem based on semantic features and proposes a multi-label, multi-level Chinese patent classification method based on semantic matching: the semantic features of each label at each level (section, class, subclass, main group, and group) are extracted from the IPC table, the semantic features of the text are extracted from public patents, and the two are semantically matched, thereby achieving automatic classification. Model comparison experiments on the same dataset show that the proposed semantic-matching-based patent classification method achieves better results.
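The matching step reduces to comparing an embedding of the patent text with embeddings of each label's explanatory text and picking the closest; a cosine-similarity sketch with hypothetical 2-D vectors and label names (real label embeddings would come from the model):

```python
# Semantic matching: choose the label whose explanation text is closest in
# embedding space to the patent text.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_label(text_vec, label_vecs):
    return max(label_vecs, key=lambda name: cosine(text_vec, label_vecs[name]))

labels = {"A01": [1.0, 0.0], "B02": [0.0, 1.0]}  # hypothetical label embeddings
choice = best_label([0.9, 0.1], labels)
```

Repeating this match level by level (section, then class, then subclass, and so on) yields the multi-level label path.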
RISC-V is an open reduced instruction set architecture developed in recent years, and TileLink is a chip-scale interconnect standard designed for RISC-V. To flexibly reuse existing AXI4 IP (Intellectual Property) resources in RISC-V processors, an efficient bus bridge design between TileLink and AXI4 is proposed. A series of sub-modules reconcile the transaction differences between TileLink and AXI4 and complete cross-protocol data transmission in pipelined fashion, increasing the data throughput of the bus bridge. Different arbitration strategies are used for conversion between the different channels of the bus bridge. In AXI4 bus response conversion, fixed-priority arbitration converts read responses preferentially, which improves system operating efficiency. In AXI4 bus write and read transaction conversion, round-robin arbitration ensures fairness between write and read transactions, balances the target channel bandwidth, and improves bus bandwidth utilization and system transmission efficiency. The bus bridge functions are verified at the module level using TileLink random test vectors, and at the FPGA system level by mounting a PCI Express root complex with an AXI4 interface. The results show that the bus bridge converts protocols correctly and greatly improves system bandwidth utilization. The bus bridge is implemented in an SMIC 55 nm CMOS process; the frequency is 714 MHz and the area is 405×405.
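The round-robin arbitration used for write/read transaction conversion can be sketched behaviorally in software terms (a toy model for intuition, not the paper's RTL):

```python
# Round-robin arbiter: grant the first active requester after the last grant,
# wrapping around, so no requester can starve the others.
def round_robin_arbiter(requests, last_grant):
    n = len(requests)
    for offset in range(1, n + 1):
        idx = (last_grant + offset) % n
        if requests[idx]:
            return idx
    return last_grant  # no requests pending

# Two requesters (e.g. write = 0, read = 1) both asserting:
g1 = round_robin_arbiter([True, True], last_grant=0)  # read granted
g2 = round_robin_arbiter([True, True], last_grant=1)  # write granted next
```

Alternating grants under sustained contention is what balances write and read bandwidth on the target channel; the fixed-priority arbiter for responses is the same loop with a constant starting index instead of `last_grant`.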
With the continuous growth of electronic design complexity, the industry has paid more and more attention to software function verification of chips. Traditional test methods and the mainstream Universal Verification Methodology (UVM) are based on test-driven software function verification, which leads to strong coupling of components in the verification environment; when carrying out software function verification in multiple environments, much time is spent designing different test components, which hinders reusability. This paper proposes a requirement-driven functional verification method. Using the many common characteristics shared between requirements, the verification infrastructure components of multiple verification environments are extracted, and a reference library model of integrated verification infrastructure components is created. All test components are driven by requirements, achieving 100% isolation among requirements, stimuli, drivers, tests, and devices under test, and creating a widely applicable verification pattern. Compared with the UVM verification methodology, the new method significantly improves reusability; it features independent components, few hierarchical relationships, and convenient use, and can meet different requirements and verification environment needs. Finally, analysis and comparison of the test results show that the new method can be quickly applied to software function verification in multiple environments and improves work efficiency.
To improve the navigation and obstacle avoidance ability of inspection robots, deep learning is applied to scene recognition, and an autonomous obstacle avoidance method for inspection robots based on road scene understanding is proposed. First, a high-precision road scene understanding network is constructed by combining multi-layer convolutions with the LeakyReLU activation function, together with residual structures and pyramid upsampling structures. Second, an adaptive control module is designed to compare the deep feature information of two consecutive images and automatically control the feature computation of subsequent network layers according to the feature difference, avoiding repeated extraction of similar features and ensuring network efficiency. Finally, the scene understanding results are transformed into target information for different areas in front of the inspection robot, and the feasible road area and obstacle locations are analyzed to guide the robot in navigation and obstacle avoidance. Experiments show that the proposed method effectively balances the recognition accuracy and computational efficiency of the scene understanding network. On an actual substation inspection robot platform, the method also shows strong adaptability, accurately and efficiently providing scene information to assist the robot in real-time autonomous obstacle avoidance.
For 3D image sensor applications, a transmitter with fast rise/fall times and a configurable output current pulse is proposed for array VCSELs (Vertical Cavity Surface Emitting Lasers). The output stage is DC-coupled, requiring no extra bias voltage or discrete components. A 4-bit programmable equalization circuit implements different pulse heights or widths of the equalizing current, improving the current rise time, signal integrity, and time-of-flight accuracy; a 4-bit current digital-to-analog converter controls the output current to achieve different output optical powers. The pulse generator is composed of delay units, selectors, and triggers, outputting pulse signals of different widths to realize different average currents. The transmitter is implemented in a 65 nm CMOS process with a 3.3 V supply voltage. Post-layout simulation results show that the output current can reach 100~500 mA. At a pulse frequency of 10 MHz, the pulse width is adjustable from 1.09 to 17.38 ns, with a rise time of 270 ps and a fall time of 90 ps. The pulse signal of the equalization circuit can be adjusted from 220 ps~3.48 ns, and the average power at the maximum output current is 0.15 W.
1. Improved fusion method based on ambient illumination condition for multispectral pedestrian detection
2. Semantic segmentation algorithm based on separable dilated convolution and joint normalization method
3. Restoration of minimum cascade mobile for wireless sensor networks
4. The improved polar decoder method of physical downlink control channel
5. A new stage for microsystem integration: the integrated development of integrated circuit chips and system-level electronic packaging
6. Design of RISC-V processor based on Chisel
7. Research on cloud access control technology based on consortium blockchains
8. Improved multi-scale edge detection method based on HED
9. Cross-corpus speech emotion recognition based on adversarial training
10. Head detection algorithm based on improved FaceBoxes
1. An Improved Particle Swarm Optimization Algorithm for Adaptive Inertial Weights
2. Task Scheduling Algorithm Based on Load Balancing Ant Colony Optimization in Cloud Computing
3. Research on Image Mosaic Algorithm Based on Improved SIFT Feature Matching
4. K-means Optimal Clustering Number Determination Method Based on Clustering Center Optimization
5. A Module-Level Reusable Randomization Verification Platform Based on UVM
6. A Method of Ellipse Fitting Based on Total Least Squares
7. The Level of K-means Clustering Algorithm Based on the Minimum Spanning Tree
8. Research of Vertical Reuse Based on UVM
9. Improved BP Neural Network Based on Simulated Annealing
10. An Improved Apriori Algorithm Based on Matrix and Weight